00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3698 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3299 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.077 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.194 > git --version # 'git version 2.39.2' 00:00:00.194 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.224 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.224 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.864 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.876 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.889 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:05.889 > git config core.sparsecheckout # timeout=10 00:00:05.900 > git read-tree -mu HEAD # timeout=10 00:00:05.917 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:05.941 Commit message: "packer: Add bios builder" 00:00:05.942 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:06.051 [Pipeline] Start of Pipeline 00:00:06.064 [Pipeline] library 00:00:06.066 Loading library shm_lib@master 00:00:06.066 Library shm_lib@master is cached. Copying from home. 00:00:06.083 [Pipeline] node 00:00:06.093 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.095 [Pipeline] { 00:00:06.108 [Pipeline] catchError 00:00:06.109 [Pipeline] { 00:00:06.125 [Pipeline] wrap 00:00:06.137 [Pipeline] { 00:00:06.148 [Pipeline] stage 00:00:06.150 [Pipeline] { (Prologue) 00:00:06.355 [Pipeline] sh 00:00:06.643 + logger -p user.info -t JENKINS-CI 00:00:06.659 [Pipeline] echo 00:00:06.660 Node: GP11 00:00:06.668 [Pipeline] sh 00:00:06.963 [Pipeline] setCustomBuildProperty 00:00:06.972 [Pipeline] echo 00:00:06.973 Cleanup processes 00:00:06.976 [Pipeline] sh 00:00:07.254 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.254 795615 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.267 [Pipeline] sh 00:00:07.552 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.553 ++ grep -v 'sudo pgrep' 00:00:07.553 ++ awk '{print $1}' 00:00:07.553 + sudo kill -9 00:00:07.553 + true 00:00:07.569 [Pipeline] cleanWs 00:00:07.579 [WS-CLEANUP] Deleting project workspace... 00:00:07.579 [WS-CLEANUP] Deferred wipeout is used... 
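The prologue traced above fetches the job-pool repo and then clears out any processes still holding the previous run's workspace. A minimal sketch of that cleanup idiom, with WORKSPACE standing in for the hard-coded Jenkins path (the job itself runs each command under sudo):

    #!/usr/bin/env bash
    # Kill anything still referencing the previous run's workspace.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # List matching processes with their full command lines, drop the
    # pgrep pipeline itself, and keep only the PID column.
    pids=$(pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')

    # When nothing matched, kill(1) sees an empty argument list and exits
    # non-zero; tolerate that exactly as the trace does ("+ true").
    kill -9 $pids || true

The unquoted $pids is deliberate: word splitting turns the PID column into separate arguments for kill.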
00:00:07.587 [WS-CLEANUP] done 00:00:07.592 [Pipeline] setCustomBuildProperty 00:00:07.609 [Pipeline] sh 00:00:07.892 + sudo git config --global --replace-all safe.directory '*' 00:00:07.979 [Pipeline] httpRequest 00:00:08.026 [Pipeline] echo 00:00:08.028 Sorcerer 10.211.164.101 is alive 00:00:08.037 [Pipeline] httpRequest 00:00:08.042 HttpMethod: GET 00:00:08.042 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.043 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.066 Response Code: HTTP/1.1 200 OK 00:00:08.066 Success: Status code 200 is in the accepted range: 200,404 00:00:08.066 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:25.376 [Pipeline] sh 00:00:25.651 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:25.666 [Pipeline] httpRequest 00:00:25.697 [Pipeline] echo 00:00:25.698 Sorcerer 10.211.164.101 is alive 00:00:25.706 [Pipeline] httpRequest 00:00:25.710 HttpMethod: GET 00:00:25.710 URL: http://10.211.164.101/packages/spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz 00:00:25.711 Sending request to url: http://10.211.164.101/packages/spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz 00:00:25.729 Response Code: HTTP/1.1 200 OK 00:00:25.729 Success: Status code 200 is in the accepted range: 200,404 00:00:25.729 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz 00:00:53.282 [Pipeline] sh 00:00:53.563 + tar --no-same-owner -xf spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz 00:00:56.099 [Pipeline] sh 00:00:56.375 + git -C spdk log --oneline -n5 00:00:56.375 cac68eec0 autotest: reduce RAID tests runs 00:00:56.375 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:00:56.375 fc2398dfa raid: clear base bdev configure_cb after executing 00:00:56.375 5558f3f50 raid: complete bdev_raid_create after sb is written 00:00:56.375 d005e023b raid: fix empty slot not updated in sb after resize 00:00:56.392 [Pipeline] withCredentials 00:00:56.401 > git --version # timeout=10 00:00:56.414 > git --version # 'git version 2.39.2' 00:00:56.432 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:56.434 [Pipeline] { 00:00:56.444 [Pipeline] retry 00:00:56.446 [Pipeline] { 00:00:56.463 [Pipeline] sh 00:00:56.746 + git ls-remote http://dpdk.org/git/dpdk main 00:01:06.750 [Pipeline] } 00:01:06.773 [Pipeline] // retry 00:01:06.778 [Pipeline] } 00:01:06.799 [Pipeline] // withCredentials 00:01:06.809 [Pipeline] httpRequest 00:01:06.832 [Pipeline] echo 00:01:06.833 Sorcerer 10.211.164.101 is alive 00:01:06.842 [Pipeline] httpRequest 00:01:06.847 HttpMethod: GET 00:01:06.848 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:06.849 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:06.857 Response Code: HTTP/1.1 200 OK 00:01:06.858 Success: Status code 200 is in the accepted range: 200,404 00:01:06.859 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:18.210 [Pipeline] sh 00:01:18.493 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:19.883 [Pipeline] sh 00:01:20.168 + git -C dpdk log --oneline -n5 00:01:20.168 82c47f005b version: 24.07-rc3 00:01:20.168 d9d1be537e doc: remove reference to mbuf pkt field 00:01:20.168 52c7393a03 doc: set required MinGW version in Windows guide 00:01:20.168 92439dc9ac dts: improve starting and stopping interactive shells 00:01:20.168 2b648cd4e4 dts: add context manager for interactive shells 00:01:20.180 [Pipeline] } 00:01:20.200 [Pipeline] // stage 00:01:20.211 [Pipeline] stage 00:01:20.213 [Pipeline] { (Prepare) 00:01:20.233 [Pipeline] writeFile 00:01:20.250 [Pipeline] sh 00:01:20.533 + logger -p user.info -t JENKINS-CI 00:01:20.551 [Pipeline] sh 00:01:20.876 + logger -p user.info -t JENKINS-CI 00:01:20.889 [Pipeline] sh 00:01:21.175 + cat autorun-spdk.conf 00:01:21.175 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.175 SPDK_TEST_NVMF=1 00:01:21.175 SPDK_TEST_NVME_CLI=1 00:01:21.175 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.175 SPDK_TEST_NVMF_NICS=e810 00:01:21.175 SPDK_TEST_VFIOUSER=1 00:01:21.175 SPDK_RUN_UBSAN=1 00:01:21.175 NET_TYPE=phy 00:01:21.175 SPDK_TEST_NATIVE_DPDK=main 00:01:21.175 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.184 RUN_NIGHTLY=1 00:01:21.188 [Pipeline] readFile 00:01:21.214 [Pipeline] withEnv 00:01:21.216 [Pipeline] { 00:01:21.229 [Pipeline] sh 00:01:21.511 + set -ex 00:01:21.511 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:21.511 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.511 ++ SPDK_TEST_NVMF=1 00:01:21.511 ++ SPDK_TEST_NVME_CLI=1 00:01:21.511 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.511 ++ SPDK_TEST_NVMF_NICS=e810 00:01:21.511 ++ SPDK_TEST_VFIOUSER=1 00:01:21.511 ++ SPDK_RUN_UBSAN=1 00:01:21.511 ++ NET_TYPE=phy 00:01:21.511 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:21.511 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.511 ++ RUN_NIGHTLY=1 00:01:21.511 + case $SPDK_TEST_NVMF_NICS in 00:01:21.511 + 
DRIVERS=ice 00:01:21.511 + [[ tcp == \r\d\m\a ]] 00:01:21.511 + [[ -n ice ]] 00:01:21.511 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:21.511 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:21.511 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:21.511 rmmod: ERROR: Module irdma is not currently loaded 00:01:21.511 rmmod: ERROR: Module i40iw is not currently loaded 00:01:21.511 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:21.511 + true 00:01:21.511 + for D in $DRIVERS 00:01:21.511 + sudo modprobe ice 00:01:21.511 + exit 0 00:01:21.521 [Pipeline] } 00:01:21.539 [Pipeline] // withEnv 00:01:21.544 [Pipeline] } 00:01:21.560 [Pipeline] // stage 00:01:21.569 [Pipeline] catchError 00:01:21.571 [Pipeline] { 00:01:21.586 [Pipeline] timeout 00:01:21.586 Timeout set to expire in 50 min 00:01:21.588 [Pipeline] { 00:01:21.603 [Pipeline] stage 00:01:21.605 [Pipeline] { (Tests) 00:01:21.620 [Pipeline] sh 00:01:21.905 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.905 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.905 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.905 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:21.905 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.905 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:21.905 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.905 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.905 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:21.905 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.905 + source /etc/os-release 00:01:21.905 ++ NAME='Fedora Linux' 00:01:21.905 ++ VERSION='38 (Cloud Edition)' 00:01:21.905 ++ ID=fedora 00:01:21.905 ++ VERSION_ID=38 00:01:21.905 ++ VERSION_CODENAME= 00:01:21.905 ++ PLATFORM_ID=platform:f38 00:01:21.905 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:21.905 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.905 ++ LOGO=fedora-logo-icon 00:01:21.905 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:21.905 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.905 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:21.905 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.905 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.905 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.905 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:21.905 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.905 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:21.905 ++ SUPPORT_END=2024-05-14 00:01:21.905 ++ VARIANT='Cloud Edition' 00:01:21.905 ++ VARIANT_ID=cloud 00:01:21.905 + uname -a 00:01:21.905 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:21.905 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:22.843 Hugepages 00:01:22.843 node hugesize free / total 00:01:22.843 node0 1048576kB 0 / 0 00:01:22.843 node0 2048kB 0 / 0 00:01:22.843 node1 1048576kB 0 / 0 00:01:22.843 node1 2048kB 0 / 0 00:01:22.843 00:01:22.843 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.843 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:22.843 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:22.843 I/OAT 0000:00:04.2 8086 
0e22 0 ioatdma - - 00:01:22.843 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:22.843 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:22.843 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:22.843 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:22.843 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:22.843 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:22.843 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:22.843 + rm -f /tmp/spdk-ld-path 00:01:22.843 + source autorun-spdk.conf 00:01:22.843 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.843 ++ SPDK_TEST_NVMF=1 00:01:22.843 ++ SPDK_TEST_NVME_CLI=1 00:01:22.843 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.843 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.843 ++ SPDK_TEST_VFIOUSER=1 00:01:22.843 ++ SPDK_RUN_UBSAN=1 00:01:22.843 ++ NET_TYPE=phy 00:01:22.843 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:22.843 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.843 ++ RUN_NIGHTLY=1 00:01:22.843 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.843 + [[ -n '' ]] 00:01:22.843 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.843 + for M in /var/spdk/build-*-manifest.txt 00:01:22.843 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.843 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.843 + for M in /var/spdk/build-*-manifest.txt 00:01:22.843 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.844 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.844 ++ uname 00:01:22.844 + [[ Linux == \L\i\n\u\x ]] 00:01:22.844 + sudo dmesg -T 00:01:22.844 + sudo dmesg --clear 00:01:22.844 + dmesg_pid=796949 00:01:22.844 + [[ Fedora Linux == FreeBSD ]] 00:01:22.844 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.844 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.844 + sudo dmesg -Tw 00:01:22.844 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.844 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.844 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.844 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.844 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.844 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:22.844 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.844 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.844 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.844 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.844 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.844 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.844 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.103 Test configuration: 00:01:23.103 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.103 SPDK_TEST_NVMF=1 00:01:23.103 SPDK_TEST_NVME_CLI=1 00:01:23.103 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.103 SPDK_TEST_NVMF_NICS=e810 00:01:23.103 SPDK_TEST_VFIOUSER=1 00:01:23.103 SPDK_RUN_UBSAN=1 00:01:23.103 NET_TYPE=phy 00:01:23.103 SPDK_TEST_NATIVE_DPDK=main 00:01:23.103 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.103 RUN_NIGHTLY=1 02:00:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.103 02:00:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.103 02:00:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.103 02:00:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.103 02:00:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.103 02:00:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.103 02:00:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.103 02:00:51 -- paths/export.sh@5 -- $ export PATH 00:01:23.103 02:00:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.103 02:00:51 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.103 02:00:51 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:23.103 02:00:51 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722038451.XXXXXX 00:01:23.103 02:00:51 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1722038451.d78iyi 00:01:23.103 02:00:51 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:23.103 02:00:51 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:01:23.103 02:00:51 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.103 02:00:51 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:23.103 02:00:51 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.103 02:00:51 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.103 02:00:51 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:23.103 02:00:51 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:23.103 02:00:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.103 02:00:51 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:23.103 02:00:51 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:23.103 02:00:51 -- pm/common@17 -- $ local monitor 00:01:23.103 02:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.103 02:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.103 02:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.103 02:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.103 02:00:51 -- pm/common@21 -- $ date +%s 00:01:23.103 02:00:51 -- pm/common@21 -- $ date +%s 00:01:23.103 02:00:51 -- pm/common@25 -- $ sleep 1 00:01:23.103 02:00:51 -- pm/common@21 -- $ date +%s 00:01:23.103 02:00:51 -- pm/common@21 -- $ date +%s 00:01:23.103 02:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722038451 00:01:23.103 02:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722038451 00:01:23.103 02:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722038451 00:01:23.103 02:00:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722038451 00:01:23.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722038451_collect-vmstat.pm.log 00:01:23.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722038451_collect-cpu-load.pm.log 00:01:23.103 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722038451_collect-cpu-temp.pm.log 00:01:23.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722038451_collect-bmc-pm.bmc.pm.log 00:01:24.043 02:00:52 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:24.043 02:00:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.043 02:00:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.043 02:00:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.043 02:00:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.043 Sat Jul 27 12:00:52 AM UTC 2024 00:01:24.043 02:00:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.043 v24.09-pre-322-gcac68eec0 00:01:24.043 02:00:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.043 02:00:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.043 02:00:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.043 02:00:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:24.043 02:00:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:24.043 02:00:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.043 ************************************ 00:01:24.043 START TEST ubsan 00:01:24.043 ************************************ 00:01:24.043 02:00:52 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:24.043 using ubsan 00:01:24.043 00:01:24.043 real 0m0.000s 00:01:24.043 user 0m0.000s 00:01:24.043 sys 0m0.000s 00:01:24.043 02:00:52 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:24.043 02:00:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.043 ************************************ 00:01:24.043 END TEST ubsan 00:01:24.043 ************************************ 00:01:24.043 02:00:52 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:24.043 02:00:52 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:24.043 02:00:52 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:24.043 02:00:52 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:24.043 02:00:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:24.043 02:00:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.043 ************************************ 00:01:24.043 START TEST build_native_dpdk 00:01:24.043 ************************************ 00:01:24.043 02:00:52 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:24.043 
02:00:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:24.043 02:00:52 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:24.044 82c47f005b version: 24.07-rc3 00:01:24.044 d9d1be537e doc: remove reference to mbuf pkt field 00:01:24.044 52c7393a03 doc: set required MinGW version in Windows guide 00:01:24.044 92439dc9ac dts: improve starting and stopping interactive shells 00:01:24.044 2b648cd4e4 dts: add context manager for interactive shells 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0 
00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:24.044 patching file config/rte_config.h 00:01:24.044 Hunk #1 succeeded at 70 (offset 11 lines). 
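The xtrace above (and the second run of it that follows) is a dotted-version comparator along the lines of the scripts/common.sh helpers being traced: each version string is split on '.', '-' and ':', and the pieces are compared numerically, with non-numeric or missing pieces such as "rc3" coerced to 0. A condensed, illustrative re-implementation (function names simplified; not the verbatim script):

    # decimal: map one version component to a number ("07" -> 7, "rc3"/"" -> 0).
    decimal() {
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then
            echo $((10#$d))      # base 10 forces "07" to 7
        else
            echo 0
        fi
    }

    # version_lt A B: succeed (return 0) when version A sorts before version B.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a b
            a=$(decimal "${ver1[v]-}") b=$(decimal "${ver2[v]-}")
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1    # equal is not "less than"
    }

    version_lt 24.07.0-rc3 21.11.0 || echo "24.07.0-rc3 >= 21.11.0"

This matches the trace: the comparison against 21.11.0 returns 1 (24.07.0-rc3 is not older), and the job proceeds to patch config/rte_config.h.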
00:01:24.044 02:00:52 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:24.044 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:01:24.303 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:01:24.303 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:01:24.304 02:00:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:24.304 02:00:52 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:24.304 02:00:52 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:24.304 02:00:52 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:24.304 02:00:52 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:24.304 02:00:52 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:28.500 The Meson build system 00:01:28.500 Version: 1.3.1 00:01:28.500 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.500 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:28.500 Build type: native build 00:01:28.500 Program cat found: YES (/usr/bin/cat) 00:01:28.500 Project name: DPDK 00:01:28.500 Project version: 24.07.0-rc3 00:01:28.500 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:28.500 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:28.500 Host machine cpu family: x86_64 00:01:28.500 Host machine cpu: x86_64 00:01:28.500 Message: ## Building in Developer Mode ## 00:01:28.500 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:28.500 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:28.500 Program options-ibverbs-static.sh found: YES 
(/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:28.500 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:28.500 Program cat found: YES (/usr/bin/cat) 00:01:28.501 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:28.501 Compiler for C supports arguments -march=native: YES 00:01:28.501 Checking for size of "void *" : 8 00:01:28.501 Checking for size of "void *" : 8 (cached) 00:01:28.501 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:28.501 Library m found: YES 00:01:28.501 Library numa found: YES 00:01:28.501 Has header "numaif.h" : YES 00:01:28.501 Library fdt found: NO 00:01:28.501 Library execinfo found: NO 00:01:28.501 Has header "execinfo.h" : YES 00:01:28.501 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:28.501 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.501 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.501 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.501 Run-time dependency openssl found: YES 3.0.9 00:01:28.501 Run-time dependency libpcap found: YES 1.10.4 00:01:28.501 Has header "pcap.h" with dependency libpcap: YES 00:01:28.501 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.501 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.501 Compiler for C supports arguments -Wformat: YES 00:01:28.501 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.501 Compiler for C supports arguments -Wformat-security: NO 00:01:28.501 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.501 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.501 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.501 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.501 Compiler for C supports arguments -Wpointer-arith: YES 00:01:28.501 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.501 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.501 Compiler for C supports arguments -Wundef: YES 00:01:28.501 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.501 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.501 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:28.501 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.501 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.501 Program objdump found: YES (/usr/bin/objdump) 00:01:28.501 Compiler for C supports arguments -mavx512f: YES 00:01:28.501 Checking if "AVX512 checking" compiles: YES 00:01:28.501 Fetching value of define "__SSE4_2__" : 1 00:01:28.501 Fetching value of define "__AES__" : 1 00:01:28.501 Fetching value of define "__AVX__" : 1 00:01:28.501 Fetching value of define "__AVX2__" : (undefined) 00:01:28.501 Fetching value of define "__AVX512BW__" : (undefined) 00:01:28.501 Fetching value of define "__AVX512CD__" : (undefined) 00:01:28.501 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:28.501 Fetching value of define "__AVX512F__" : (undefined) 00:01:28.501 Fetching value of define "__AVX512VL__" : (undefined) 00:01:28.501 Fetching value of define "__PCLMUL__" : 1 00:01:28.501 Fetching value of define "__RDRND__" : 1 00:01:28.501 Fetching value of define "__RDSEED__" : (undefined) 00:01:28.501 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:28.501 
Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.501 Message: lib/log: Defining dependency "log" 00:01:28.501 Message: lib/kvargs: Defining dependency "kvargs" 00:01:28.501 Message: lib/argparse: Defining dependency "argparse" 00:01:28.501 Message: lib/telemetry: Defining dependency "telemetry" 00:01:28.501 Checking for function "getentropy" : NO 00:01:28.501 Message: lib/eal: Defining dependency "eal" 00:01:28.501 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:28.501 Message: lib/ring: Defining dependency "ring" 00:01:28.501 Message: lib/rcu: Defining dependency "rcu" 00:01:28.501 Message: lib/mempool: Defining dependency "mempool" 00:01:28.501 Message: lib/mbuf: Defining dependency "mbuf" 00:01:28.501 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.501 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.501 Compiler for C supports arguments -mpclmul: YES 00:01:28.501 Compiler for C supports arguments -maes: YES 00:01:28.501 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.501 Compiler for C supports arguments -mavx512bw: YES 00:01:28.501 Compiler for C supports arguments -mavx512dq: YES 00:01:28.501 Compiler for C supports arguments -mavx512vl: YES 00:01:28.501 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:28.501 Compiler for C supports arguments -mavx2: YES 00:01:28.501 Compiler for C supports arguments -mavx: YES 00:01:28.501 Message: lib/net: Defining dependency "net" 00:01:28.501 Message: lib/meter: Defining dependency "meter" 00:01:28.501 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.501 Message: lib/pci: Defining dependency "pci" 00:01:28.501 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.501 Message: lib/metrics: Defining dependency "metrics" 00:01:28.501 Message: lib/hash: Defining dependency "hash" 00:01:28.501 Message: lib/timer: Defining dependency "timer" 00:01:28.501 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.501 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:28.501 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:28.501 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:28.501 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:28.501 Message: lib/acl: Defining dependency "acl" 00:01:28.501 Message: lib/bbdev: Defining dependency "bbdev" 00:01:28.501 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:28.501 Run-time dependency libelf found: YES 0.190 00:01:28.501 Message: lib/bpf: Defining dependency "bpf" 00:01:28.501 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:28.501 Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.501 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.501 Message: lib/distributor: Defining dependency "distributor" 00:01:28.501 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.501 Message: lib/efd: Defining dependency "efd" 00:01:28.501 Message: lib/eventdev: Defining dependency "eventdev" 00:01:28.501 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:28.501 Message: lib/gpudev: Defining dependency "gpudev" 00:01:28.501 Message: lib/gro: Defining dependency "gro" 00:01:28.501 Message: lib/gso: Defining dependency "gso" 00:01:28.501 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:28.501 Message: lib/jobstats: Defining dependency "jobstats" 00:01:28.501 Message: lib/latencystats: Defining dependency 
"latencystats" 00:01:28.501 Message: lib/lpm: Defining dependency "lpm" 00:01:28.501 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.501 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:28.501 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:28.501 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:28.501 Message: lib/member: Defining dependency "member" 00:01:28.501 Message: lib/pcapng: Defining dependency "pcapng" 00:01:28.501 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.501 Message: lib/power: Defining dependency "power" 00:01:28.501 Message: lib/rawdev: Defining dependency "rawdev" 00:01:28.501 Message: lib/regexdev: Defining dependency "regexdev" 00:01:28.501 Message: lib/mldev: Defining dependency "mldev" 00:01:28.501 Message: lib/rib: Defining dependency "rib" 00:01:28.501 Message: lib/reorder: Defining dependency "reorder" 00:01:28.501 Message: lib/sched: Defining dependency "sched" 00:01:28.501 Message: lib/security: Defining dependency "security" 00:01:28.501 Message: lib/stack: Defining dependency "stack" 00:01:28.501 Has header "linux/userfaultfd.h" : YES 00:01:28.501 Has header "linux/vduse.h" : YES 00:01:28.501 Message: lib/vhost: Defining dependency "vhost" 00:01:28.501 Message: lib/ipsec: Defining dependency "ipsec" 00:01:28.501 Message: lib/pdcp: Defining dependency "pdcp" 00:01:28.501 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.501 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:28.501 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:28.501 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:28.501 Message: lib/fib: Defining dependency "fib" 00:01:28.501 Message: lib/port: Defining dependency "port" 00:01:28.501 Message: lib/pdump: Defining dependency "pdump" 00:01:28.501 Message: lib/table: Defining dependency "table" 00:01:28.501 Message: lib/pipeline: Defining dependency "pipeline" 00:01:28.501 Message: lib/graph: Defining dependency "graph" 00:01:28.501 Message: lib/node: Defining dependency "node" 00:01:29.441 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.441 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.441 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.442 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.442 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:29.442 Compiler for C supports arguments -Wno-unused-value: YES 00:01:29.442 Compiler for C supports arguments -Wno-format: YES 00:01:29.442 Compiler for C supports arguments -Wno-format-security: YES 00:01:29.442 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:29.442 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:29.442 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:29.442 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:29.442 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.442 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.442 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.442 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:29.442 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:29.442 Has header "sys/epoll.h" : YES 00:01:29.442 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.442 Configuring doxy-api-html.conf using configuration 
00:01:29.442 Configuring doxy-api-man.conf using configuration 00:01:29.442 Program mandb found: YES (/usr/bin/mandb) 00:01:29.442 Program sphinx-build found: NO 00:01:29.442 Configuring rte_build_config.h using configuration 00:01:29.442 Message: 00:01:29.442 ================= 00:01:29.442 Applications Enabled 00:01:29.442 ================= 00:01:29.442 00:01:29.442 apps: 00:01:29.442 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:29.442 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:29.442 test-pmd, test-regex, test-sad, test-security-perf, 00:01:29.442 00:01:29.442 Message: 00:01:29.442 ================= 00:01:29.442 Libraries Enabled 00:01:29.442 ================= 00:01:29.442 00:01:29.442 libs: 00:01:29.442 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:29.442 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:29.442 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:29.442 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:29.442 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:29.442 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:29.442 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:29.442 graph, node, 00:01:29.442 00:01:29.442 Message: 00:01:29.442 =============== 00:01:29.442 Drivers Enabled 00:01:29.442 =============== 00:01:29.442 00:01:29.442 common: 00:01:29.442 00:01:29.442 bus: 00:01:29.442 pci, vdev, 00:01:29.442 mempool: 00:01:29.442 ring, 00:01:29.442 dma: 00:01:29.442 00:01:29.442 net: 00:01:29.442 i40e, 00:01:29.442 raw: 00:01:29.442 00:01:29.442 crypto: 00:01:29.442 00:01:29.442 compress: 00:01:29.442 00:01:29.442 regex: 00:01:29.442 00:01:29.442 ml: 00:01:29.442 00:01:29.442 vdpa: 00:01:29.442 00:01:29.442 event: 00:01:29.442 00:01:29.442 baseband: 00:01:29.442 00:01:29.442 gpu: 00:01:29.442 00:01:29.442 00:01:29.442 Message: 00:01:29.442 ================= 00:01:29.442 Content Skipped 00:01:29.442 ================= 00:01:29.442 00:01:29.442 apps: 00:01:29.442 00:01:29.442 libs: 00:01:29.442 00:01:29.442 drivers: 00:01:29.442 common/cpt: not in enabled drivers build config 00:01:29.442 common/dpaax: not in enabled drivers build config 00:01:29.442 common/iavf: not in enabled drivers build config 00:01:29.442 common/idpf: not in enabled drivers build config 00:01:29.442 common/ionic: not in enabled drivers build config 00:01:29.442 common/mvep: not in enabled drivers build config 00:01:29.442 common/octeontx: not in enabled drivers build config 00:01:29.442 bus/auxiliary: not in enabled drivers build config 00:01:29.442 bus/cdx: not in enabled drivers build config 00:01:29.442 bus/dpaa: not in enabled drivers build config 00:01:29.442 bus/fslmc: not in enabled drivers build config 00:01:29.442 bus/ifpga: not in enabled drivers build config 00:01:29.442 bus/platform: not in enabled drivers build config 00:01:29.442 bus/uacce: not in enabled drivers build config 00:01:29.442 bus/vmbus: not in enabled drivers build config 00:01:29.442 common/cnxk: not in enabled drivers build config 00:01:29.442 common/mlx5: not in enabled drivers build config 00:01:29.442 common/nfp: not in enabled drivers build config 00:01:29.442 common/nitrox: not in enabled drivers build config 00:01:29.442 common/qat: not in enabled drivers build config 00:01:29.442 common/sfc_efx: not in enabled drivers build config 00:01:29.442 
mempool/bucket: not in enabled drivers build config 00:01:29.442 mempool/cnxk: not in enabled drivers build config 00:01:29.442 mempool/dpaa: not in enabled drivers build config 00:01:29.442 mempool/dpaa2: not in enabled drivers build config 00:01:29.442 mempool/octeontx: not in enabled drivers build config 00:01:29.442 mempool/stack: not in enabled drivers build config 00:01:29.442 dma/cnxk: not in enabled drivers build config 00:01:29.442 dma/dpaa: not in enabled drivers build config 00:01:29.442 dma/dpaa2: not in enabled drivers build config 00:01:29.442 dma/hisilicon: not in enabled drivers build config 00:01:29.442 dma/idxd: not in enabled drivers build config 00:01:29.442 dma/ioat: not in enabled drivers build config 00:01:29.442 dma/odm: not in enabled drivers build config 00:01:29.442 dma/skeleton: not in enabled drivers build config 00:01:29.442 net/af_packet: not in enabled drivers build config 00:01:29.442 net/af_xdp: not in enabled drivers build config 00:01:29.442 net/ark: not in enabled drivers build config 00:01:29.442 net/atlantic: not in enabled drivers build config 00:01:29.442 net/avp: not in enabled drivers build config 00:01:29.442 net/axgbe: not in enabled drivers build config 00:01:29.442 net/bnx2x: not in enabled drivers build config 00:01:29.442 net/bnxt: not in enabled drivers build config 00:01:29.442 net/bonding: not in enabled drivers build config 00:01:29.442 net/cnxk: not in enabled drivers build config 00:01:29.442 net/cpfl: not in enabled drivers build config 00:01:29.442 net/cxgbe: not in enabled drivers build config 00:01:29.442 net/dpaa: not in enabled drivers build config 00:01:29.442 net/dpaa2: not in enabled drivers build config 00:01:29.442 net/e1000: not in enabled drivers build config 00:01:29.442 net/ena: not in enabled drivers build config 00:01:29.442 net/enetc: not in enabled drivers build config 00:01:29.442 net/enetfec: not in enabled drivers build config 00:01:29.442 net/enic: not in enabled drivers build config 00:01:29.442 net/failsafe: not in enabled drivers build config 00:01:29.442 net/fm10k: not in enabled drivers build config 00:01:29.442 net/gve: not in enabled drivers build config 00:01:29.442 net/hinic: not in enabled drivers build config 00:01:29.442 net/hns3: not in enabled drivers build config 00:01:29.442 net/iavf: not in enabled drivers build config 00:01:29.442 net/ice: not in enabled drivers build config 00:01:29.442 net/idpf: not in enabled drivers build config 00:01:29.442 net/igc: not in enabled drivers build config 00:01:29.442 net/ionic: not in enabled drivers build config 00:01:29.442 net/ipn3ke: not in enabled drivers build config 00:01:29.442 net/ixgbe: not in enabled drivers build config 00:01:29.442 net/mana: not in enabled drivers build config 00:01:29.442 net/memif: not in enabled drivers build config 00:01:29.442 net/mlx4: not in enabled drivers build config 00:01:29.442 net/mlx5: not in enabled drivers build config 00:01:29.442 net/mvneta: not in enabled drivers build config 00:01:29.442 net/mvpp2: not in enabled drivers build config 00:01:29.442 net/netvsc: not in enabled drivers build config 00:01:29.442 net/nfb: not in enabled drivers build config 00:01:29.442 net/nfp: not in enabled drivers build config 00:01:29.442 net/ngbe: not in enabled drivers build config 00:01:29.442 net/ntnic: not in enabled drivers build config 00:01:29.442 net/null: not in enabled drivers build config 00:01:29.442 net/octeontx: not in enabled drivers build config 00:01:29.442 net/octeon_ep: not in enabled drivers build config 
00:01:29.442 net/pcap: not in enabled drivers build config 00:01:29.442 net/pfe: not in enabled drivers build config 00:01:29.442 net/qede: not in enabled drivers build config 00:01:29.442 net/ring: not in enabled drivers build config 00:01:29.442 net/sfc: not in enabled drivers build config 00:01:29.442 net/softnic: not in enabled drivers build config 00:01:29.442 net/tap: not in enabled drivers build config 00:01:29.442 net/thunderx: not in enabled drivers build config 00:01:29.442 net/txgbe: not in enabled drivers build config 00:01:29.442 net/vdev_netvsc: not in enabled drivers build config 00:01:29.442 net/vhost: not in enabled drivers build config 00:01:29.442 net/virtio: not in enabled drivers build config 00:01:29.442 net/vmxnet3: not in enabled drivers build config 00:01:29.443 raw/cnxk_bphy: not in enabled drivers build config 00:01:29.443 raw/cnxk_gpio: not in enabled drivers build config 00:01:29.443 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:29.443 raw/ifpga: not in enabled drivers build config 00:01:29.443 raw/ntb: not in enabled drivers build config 00:01:29.443 raw/skeleton: not in enabled drivers build config 00:01:29.443 crypto/armv8: not in enabled drivers build config 00:01:29.443 crypto/bcmfs: not in enabled drivers build config 00:01:29.443 crypto/caam_jr: not in enabled drivers build config 00:01:29.443 crypto/ccp: not in enabled drivers build config 00:01:29.443 crypto/cnxk: not in enabled drivers build config 00:01:29.443 crypto/dpaa_sec: not in enabled drivers build config 00:01:29.443 crypto/dpaa2_sec: not in enabled drivers build config 00:01:29.443 crypto/ionic: not in enabled drivers build config 00:01:29.443 crypto/ipsec_mb: not in enabled drivers build config 00:01:29.443 crypto/mlx5: not in enabled drivers build config 00:01:29.443 crypto/mvsam: not in enabled drivers build config 00:01:29.443 crypto/nitrox: not in enabled drivers build config 00:01:29.443 crypto/null: not in enabled drivers build config 00:01:29.443 crypto/octeontx: not in enabled drivers build config 00:01:29.443 crypto/openssl: not in enabled drivers build config 00:01:29.443 crypto/scheduler: not in enabled drivers build config 00:01:29.443 crypto/uadk: not in enabled drivers build config 00:01:29.443 crypto/virtio: not in enabled drivers build config 00:01:29.443 compress/isal: not in enabled drivers build config 00:01:29.443 compress/mlx5: not in enabled drivers build config 00:01:29.443 compress/nitrox: not in enabled drivers build config 00:01:29.443 compress/octeontx: not in enabled drivers build config 00:01:29.443 compress/uadk: not in enabled drivers build config 00:01:29.443 compress/zlib: not in enabled drivers build config 00:01:29.443 regex/mlx5: not in enabled drivers build config 00:01:29.443 regex/cn9k: not in enabled drivers build config 00:01:29.443 ml/cnxk: not in enabled drivers build config 00:01:29.443 vdpa/ifc: not in enabled drivers build config 00:01:29.443 vdpa/mlx5: not in enabled drivers build config 00:01:29.443 vdpa/nfp: not in enabled drivers build config 00:01:29.443 vdpa/sfc: not in enabled drivers build config 00:01:29.443 event/cnxk: not in enabled drivers build config 00:01:29.443 event/dlb2: not in enabled drivers build config 00:01:29.443 event/dpaa: not in enabled drivers build config 00:01:29.443 event/dpaa2: not in enabled drivers build config 00:01:29.443 event/dsw: not in enabled drivers build config 00:01:29.443 event/opdl: not in enabled drivers build config 00:01:29.443 event/skeleton: not in enabled drivers build config 
00:01:29.443 event/sw: not in enabled drivers build config 00:01:29.443 event/octeontx: not in enabled drivers build config 00:01:29.443 baseband/acc: not in enabled drivers build config 00:01:29.443 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:29.443 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:29.443 baseband/la12xx: not in enabled drivers build config 00:01:29.443 baseband/null: not in enabled drivers build config 00:01:29.443 baseband/turbo_sw: not in enabled drivers build config 00:01:29.443 gpu/cuda: not in enabled drivers build config 00:01:29.443 00:01:29.443 00:01:29.443 Build targets in project: 224 00:01:29.443 00:01:29.443 DPDK 24.07.0-rc3 00:01:29.443 00:01:29.443 User defined options 00:01:29.443 libdir : lib 00:01:29.443 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.443 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:29.443 c_link_args : 00:01:29.443 enable_docs : false 00:01:29.443 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.443 enable_kmods : false 00:01:29.443 machine : native 00:01:29.443 tests : false 00:01:29.443 00:01:29.443 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.443 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:29.443 02:00:57 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:29.710 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:29.710 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:29.710 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:29.710 [3/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:29.710 [4/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:29.710 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:29.710 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:29.710 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:29.710 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:29.710 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:29.710 [10/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:29.710 [11/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:29.710 [12/723] Linking static target lib/librte_kvargs.a 00:01:29.970 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:29.970 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:29.970 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:29.970 [16/723] Linking static target lib/librte_log.a 00:01:30.231 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:30.231 [18/723] Linking static target lib/librte_argparse.a 00:01:30.231 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.496 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.758 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.758 [22/723] Compiling C object 
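(The WARNING above is emitted because meson was invoked without the `setup` subcommand. As a minimal sketch only, reconstructed from the "User defined options" summary printed earlier in this log, the explicit, non-deprecated invocation would look roughly as follows; the build directory name is taken from the `ninja -C` path, and everything else maps one-to-one onto the logged options:

  meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
  ninja -C build-tmp -j48

Standard meson built-ins take the `--prefix`/`--libdir` form, while DPDK project options such as enable_drivers use `-D`; this sketch is inferred from the log, not the command the harness actually ran.)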
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:30.758 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.758 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:30.758 [25/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.758 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:30.758 [27/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.758 [28/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.758 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.758 [30/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:30.758 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.758 [32/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.758 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:30.758 [34/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:30.758 [35/723] Linking target lib/librte_log.so.24.2 00:01:30.758 [36/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.758 [37/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.758 [38/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.758 [39/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.758 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:31.022 [41/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:31.022 [42/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:31.022 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:31.022 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.022 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.022 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.022 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.022 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.022 [49/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.022 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:31.022 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:31.022 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.022 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.022 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:31.022 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:31.022 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.022 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:31.022 [58/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:31.022 [59/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:31.022 [60/723] Linking target lib/librte_kvargs.so.24.2 00:01:31.022 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.022 
[62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.283 [63/723] Linking target lib/librte_argparse.so.24.2 00:01:31.283 [64/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.283 [65/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:31.283 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.283 [67/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.548 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.548 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.548 [70/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.548 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:31.548 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.807 [73/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.807 [74/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.807 [75/723] Linking static target lib/librte_pci.a 00:01:31.807 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.807 [77/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:31.807 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.807 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.807 [80/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:32.068 [81/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:32.068 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:32.068 [83/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:32.068 [84/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:32.068 [85/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:32.068 [86/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:32.068 [87/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:32.068 [88/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:32.068 [89/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:32.068 [90/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:32.068 [91/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:32.068 [92/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:32.068 [93/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:32.068 [94/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:32.068 [95/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:32.068 [96/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.068 [97/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:32.329 [98/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:32.329 [99/723] Linking static target lib/librte_ring.a 00:01:32.329 [100/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:32.329 [101/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:32.329 [102/723] Linking static target lib/librte_meter.a 00:01:32.329 [103/723] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:32.329 [104/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:32.329 [105/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:32.329 [106/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.329 [107/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:32.329 [108/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:32.329 [109/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:32.329 [110/723] Linking static target lib/librte_telemetry.a 00:01:32.329 [111/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:32.329 [112/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:32.595 [113/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.595 [114/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:32.595 [115/723] Linking static target lib/librte_net.a 00:01:32.595 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.595 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:32.595 [118/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.595 [119/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.856 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:32.856 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.856 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:32.856 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.856 [124/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.856 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.856 [126/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.856 [127/723] Linking static target lib/librte_mempool.a 00:01:32.856 [128/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:33.118 [129/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.118 [130/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.118 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:33.118 [132/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:33.118 [133/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:33.118 [134/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:33.118 [135/723] Linking target lib/librte_telemetry.so.24.2 00:01:33.118 [136/723] Linking static target lib/librte_eal.a 00:01:33.118 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:33.118 [138/723] Linking static target lib/librte_cmdline.a 00:01:33.381 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:33.381 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:33.381 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:33.381 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:33.381 [143/723] Generating symbol file 
lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:33.381 [144/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:33.381 [145/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:33.381 [146/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:33.381 [147/723] Linking static target lib/librte_cfgfile.a 00:01:33.381 [148/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:33.381 [149/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:33.381 [150/723] Linking static target lib/librte_metrics.a 00:01:33.642 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:33.642 [152/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:33.642 [153/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:33.642 [154/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:33.642 [155/723] Linking static target lib/librte_rcu.a 00:01:33.642 [156/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:33.642 [157/723] Linking static target lib/librte_bitratestats.a 00:01:33.643 [158/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:33.643 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:33.905 [160/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:33.905 [161/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:33.905 [162/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.905 [163/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:33.905 [164/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.905 [165/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:33.905 [166/723] Linking static target lib/librte_mbuf.a 00:01:33.905 [167/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:33.905 [168/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:34.197 [169/723] Linking static target lib/librte_timer.a 00:01:34.197 [170/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.197 [171/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.197 [172/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.197 [173/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:34.197 [174/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:34.197 [175/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:34.197 [176/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:34.197 [177/723] Linking static target lib/librte_bbdev.a 00:01:34.197 [178/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:34.197 [179/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:34.503 [180/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:34.503 [181/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:34.503 [182/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.503 [183/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:34.503 [184/723] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:34.503 [185/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:34.503 [186/723] Linking static target lib/librte_compressdev.a 00:01:34.503 [187/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.503 [188/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:34.503 [189/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:34.503 [190/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:34.767 [191/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:34.767 [192/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:34.767 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.030 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:35.292 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:35.292 [196/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.292 [197/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.292 [198/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:35.292 [199/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.292 [200/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:35.292 [201/723] Linking static target lib/librte_dmadev.a 00:01:35.292 [202/723] Linking static target lib/librte_distributor.a 00:01:35.553 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:35.553 [204/723] Linking static target lib/librte_bpf.a 00:01:35.553 [205/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:35.553 [206/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:35.553 [207/723] Linking static target lib/librte_dispatcher.a 00:01:35.553 [208/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:35.553 [209/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:35.553 [210/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:35.553 [211/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:35.553 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:35.553 [213/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:35.553 [214/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:35.815 [215/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.815 [216/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:35.815 [217/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:35.815 [218/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:35.815 [219/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.815 [220/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.816 [221/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:35.816 [222/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:35.816 [223/723] Linking static target lib/librte_gpudev.a 00:01:35.816 [224/723] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:35.816 [225/723] Linking static target lib/librte_gro.a 00:01:35.816 [226/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:35.816 [227/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:35.816 [228/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.816 [229/723] Linking static target lib/librte_jobstats.a 00:01:36.074 [230/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:36.074 [231/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.074 [232/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:36.074 [233/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:36.074 [234/723] Linking static target lib/librte_gso.a 00:01:36.074 [235/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.074 [236/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.337 [237/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:36.337 [238/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:36.337 [239/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.337 [240/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:36.337 [241/723] Linking static target lib/librte_latencystats.a 00:01:36.337 [242/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:36.337 [243/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.337 [244/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.337 [245/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:36.337 [246/723] Linking static target lib/librte_ip_frag.a 00:01:36.337 [247/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:36.604 [248/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:36.604 [249/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:36.604 [250/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:36.604 [251/723] Linking static target lib/librte_efd.a 00:01:36.604 [252/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:36.604 [253/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:36.604 [254/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:36.604 [255/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:36.604 [256/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.862 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:36.862 [258/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:36.862 [259/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.862 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.862 [261/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.862 [262/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:37.126 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:37.126 [264/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:37.126 [265/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:37.126 [266/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:37.126 [267/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:37.126 [268/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:37.126 [269/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.126 [270/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:37.388 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:37.388 [272/723] Linking static target lib/librte_regexdev.a 00:01:37.388 [273/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:37.388 [274/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:37.388 [275/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:37.388 [276/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:37.388 [277/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:37.388 [278/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:37.388 [279/723] Linking static target lib/librte_rawdev.a 00:01:37.388 [280/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:37.388 [281/723] Linking static target lib/librte_pcapng.a 00:01:37.388 [282/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:37.649 [283/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:37.649 [284/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:37.649 [285/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.649 [286/723] Linking static target lib/librte_power.a 00:01:37.649 [287/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:37.649 [288/723] Linking static target lib/librte_stack.a 00:01:37.649 [289/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:37.649 [290/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:37.649 [291/723] Linking static target lib/librte_mldev.a 00:01:37.649 [292/723] Linking static target lib/librte_lpm.a 00:01:37.649 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:37.912 [294/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:37.912 [295/723] Linking static target lib/acl/libavx2_tmp.a 00:01:37.912 [296/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:37.912 [297/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.912 [298/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:37.912 [299/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.912 [300/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:37.912 [301/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.912 [302/723] Linking static target lib/librte_reorder.a 00:01:37.912 [303/723] Linking static target lib/librte_cryptodev.a 00:01:37.912 [304/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.176 [305/723] 
Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:38.176 [306/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.176 [307/723] Linking static target lib/librte_security.a 00:01:38.176 [308/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.176 [309/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.176 [310/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.176 [311/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:38.176 [312/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.176 [313/723] Linking static target lib/librte_hash.a 00:01:38.440 [314/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.440 [315/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.440 [316/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.440 [317/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:38.440 [318/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:38.440 [319/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:38.440 [320/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.441 [321/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.441 [322/723] Linking static target lib/acl/libavx512_tmp.a 00:01:38.441 [323/723] Linking static target lib/librte_acl.a 00:01:38.441 [324/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:38.441 [325/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:38.441 [326/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:38.441 [327/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:38.441 [328/723] Linking static target lib/librte_rib.a 00:01:38.703 [329/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:38.703 [330/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:38.703 [331/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:38.703 [332/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:38.703 [333/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:38.703 [334/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.703 [335/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:38.703 [336/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:38.703 [337/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:38.968 [338/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:38.968 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.968 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:39.231 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.231 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:39.231 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.231 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:39.804 [345/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 
00:01:39.804 [346/723] Linking static target lib/librte_eventdev.a 00:01:39.804 [347/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:39.804 [348/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.804 [349/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:39.804 [350/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:39.804 [351/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:39.804 [352/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:39.804 [353/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:39.804 [354/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:39.804 [355/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:40.068 [356/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.068 [357/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.069 [358/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:40.069 [359/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:40.069 [360/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:40.069 [361/723] Linking static target lib/librte_member.a 00:01:40.069 [362/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:40.069 [363/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:40.069 [364/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.069 [365/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:40.069 [366/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:40.069 [367/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:40.069 [368/723] Linking static target lib/librte_sched.a 00:01:40.069 [369/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:40.069 [370/723] Linking static target lib/librte_fib.a 00:01:40.332 [371/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:40.332 [372/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:40.332 [373/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:40.332 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:40.332 [375/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:40.332 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:40.332 [377/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:40.333 [378/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:40.595 [379/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:40.595 [380/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.595 [381/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:40.595 [382/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.595 [383/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:40.595 [384/723] Linking static target lib/librte_ethdev.a 00:01:40.595 [385/723] Linking static target lib/librte_ipsec.a 00:01:40.595 [386/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.861 [387/723] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:40.861 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.861 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:40.861 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:41.122 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:41.122 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:41.122 [393/723] Linking static target lib/librte_pdump.a 00:01:41.122 [394/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:41.122 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:41.122 [396/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.122 [397/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:41.122 [398/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:41.122 [399/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:41.389 [400/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:41.389 [401/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:41.389 [402/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:41.389 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:41.389 [404/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:41.389 [405/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:41.389 [406/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:41.389 [407/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.651 [408/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:41.651 [409/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:41.651 [410/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:41.651 [411/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:41.651 [412/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:41.651 [413/723] Linking static target lib/librte_pdcp.a 00:01:41.651 [414/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:41.651 [415/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:41.651 [416/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:41.651 [417/723] Linking static target lib/librte_table.a 00:01:41.651 [418/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:41.914 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:41.914 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:41.914 [421/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:42.175 [422/723] Linking static target lib/librte_graph.a 00:01:42.175 [423/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:42.175 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.175 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:42.439 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.439 [427/723] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:42.439 [428/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:42.439 [429/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:42.439 [430/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:42.439 [431/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:42.439 [432/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:42.439 [433/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:42.439 [434/723] Linking static target lib/librte_port.a 00:01:42.439 [435/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:42.707 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:42.707 [437/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:42.707 [438/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:42.707 [439/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.707 [440/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.707 [441/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.968 [442/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:42.968 [443/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.968 [444/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.968 [445/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.968 [446/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:42.968 [447/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.968 [448/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.968 [449/723] Linking static target drivers/librte_bus_vdev.a 00:01:42.968 [450/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.229 [451/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.229 [452/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:43.229 [453/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:43.229 [454/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:43.229 [455/723] Linking static target lib/librte_node.a 00:01:43.229 [456/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:43.229 [457/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.229 [458/723] Linking static target drivers/librte_bus_pci.a 00:01:43.229 [459/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:43.229 [460/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.493 [461/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:43.493 [462/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.493 [463/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:43.493 [464/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.493 [465/723] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:43.493 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:43.493 [467/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:43.493 [468/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:43.758 [469/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:43.758 [470/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:43.758 [471/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:43.758 [472/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:43.758 [473/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:43.758 [474/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.758 [475/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.027 [476/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:44.027 [477/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:44.027 [478/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:44.027 [479/723] Linking target lib/librte_eal.so.24.2 00:01:44.027 [480/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:44.027 [481/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.027 [482/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:44.289 [483/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.289 [484/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.289 [485/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.289 [486/723] Linking static target drivers/librte_mempool_ring.a 00:01:44.289 [487/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:44.289 [488/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:44.289 [489/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:44.289 [490/723] Linking target lib/librte_ring.so.24.2 00:01:44.289 [491/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:44.289 [492/723] Linking target lib/librte_meter.so.24.2 00:01:44.551 [493/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:44.551 [494/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:44.551 [495/723] Linking target lib/librte_pci.so.24.2 00:01:44.551 [496/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:44.551 [497/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:44.551 [498/723] Linking target lib/librte_timer.so.24.2 00:01:44.551 [499/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:44.551 [500/723] Linking target lib/librte_acl.so.24.2 00:01:44.551 [501/723] Linking target lib/librte_cfgfile.so.24.2 00:01:44.551 [502/723] Linking target lib/librte_dmadev.so.24.2 00:01:44.551 [503/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:44.551 [504/723] Linking target lib/librte_jobstats.so.24.2 00:01:44.551 [505/723] Linking target lib/librte_rawdev.so.24.2 00:01:44.551 [506/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:44.551 [507/723] Linking target 
lib/librte_rcu.so.24.2 00:01:44.551 [508/723] Linking target lib/librte_mempool.so.24.2 00:01:44.551 [509/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:44.816 [510/723] Linking target lib/librte_stack.so.24.2 00:01:44.816 [511/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:44.816 [512/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:44.816 [513/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:44.816 [514/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:44.816 [515/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:44.816 [516/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:44.816 [517/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:44.816 [518/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:44.816 [519/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:44.816 [520/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:44.816 [521/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:44.816 [522/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:44.816 [523/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:44.816 [524/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:44.816 [525/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:45.081 [526/723] Linking target lib/librte_rib.so.24.2 00:01:45.081 [527/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:45.081 [528/723] Linking target lib/librte_mbuf.so.24.2 00:01:45.081 [529/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:45.081 [530/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:45.081 [531/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:45.081 [532/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:45.081 [533/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:45.081 [534/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:45.081 [535/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:45.081 [536/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:45.081 [537/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:45.343 [538/723] Linking target lib/librte_compressdev.so.24.2 00:01:45.343 [539/723] Linking target lib/librte_bbdev.so.24.2 00:01:45.343 [540/723] Linking target lib/librte_net.so.24.2 00:01:45.343 [541/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:45.343 [542/723] Linking target lib/librte_distributor.so.24.2 00:01:45.343 [543/723] Linking target lib/librte_gpudev.so.24.2 00:01:45.343 [544/723] Linking target lib/librte_cryptodev.so.24.2 00:01:45.343 [545/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:45.343 [546/723] Linking target lib/librte_regexdev.so.24.2 00:01:45.343 [547/723] Linking target lib/librte_mldev.so.24.2 00:01:45.343 [548/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 
00:01:45.343 [549/723] Linking target lib/librte_reorder.so.24.2 00:01:45.343 [550/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:45.603 [551/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:45.603 [552/723] Linking target lib/librte_fib.so.24.2 00:01:45.603 [553/723] Linking target lib/librte_sched.so.24.2 00:01:45.603 [554/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:45.603 [555/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:45.603 [556/723] Linking target lib/librte_cmdline.so.24.2 00:01:45.603 [557/723] Linking target lib/librte_hash.so.24.2 00:01:45.603 [558/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:45.603 [559/723] Linking target lib/librte_security.so.24.2 00:01:45.603 [560/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:45.604 [561/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:45.604 [562/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:45.604 [563/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:45.604 [564/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:45.604 [565/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:45.865 [566/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:45.865 [567/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:45.865 [568/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:45.865 [569/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:45.865 [570/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:45.865 [571/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:45.865 [572/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:45.865 [573/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.865 [574/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:45.865 [575/723] Linking target lib/librte_efd.so.24.2 00:01:45.865 [576/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:45.865 [577/723] Linking target lib/librte_lpm.so.24.2 00:01:45.865 [578/723] Linking target lib/librte_member.so.24.2 00:01:45.865 [579/723] Linking target lib/librte_ipsec.so.24.2 00:01:45.865 [580/723] Linking target lib/librte_pdcp.so.24.2 00:01:45.865 [581/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:46.132 [582/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:46.132 [583/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:46.132 [584/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:46.132 [585/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:46.132 [586/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:46.132 [587/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 
00:01:46.132 [588/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:46.132 [589/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:46.132 [590/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:46.132 [591/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:46.398 [592/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:46.398 [593/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:46.660 [594/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:46.660 [595/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:46.660 [596/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:46.660 [597/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:46.660 [598/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:46.921 [599/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:46.921 [600/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:46.921 [601/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:46.921 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:46.921 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:46.921 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:46.921 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:46.921 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:47.183 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:47.183 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:47.183 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:47.441 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:47.441 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:47.441 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:47.441 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:47.441 [614/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:47.441 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:47.441 [616/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:47.441 [617/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:47.441 [618/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:47.441 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:47.441 [620/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:47.700 [621/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:47.700 [622/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:47.963 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:47.964 [624/723] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:48.222 [625/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:48.222 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:48.222 [627/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:48.222 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:48.222 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:48.222 [630/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:48.222 [631/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:48.222 [632/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:48.222 [633/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:48.480 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:48.480 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:48.480 [636/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:48.480 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:48.480 [638/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.480 [639/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:48.480 [640/723] Linking target lib/librte_ethdev.so.24.2 00:01:48.480 [641/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:48.480 [642/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:48.739 [643/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:48.739 [644/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:48.739 [645/723] Linking target lib/librte_gso.so.24.2 00:01:48.739 [646/723] Linking target lib/librte_ip_frag.so.24.2 00:01:48.739 [647/723] Linking target lib/librte_gro.so.24.2 00:01:48.739 [648/723] Linking target lib/librte_eventdev.so.24.2 00:01:48.739 [649/723] Linking target lib/librte_metrics.so.24.2 00:01:48.739 [650/723] Linking target lib/librte_pcapng.so.24.2 00:01:48.739 [651/723] Linking target lib/librte_power.so.24.2 00:01:48.739 [652/723] Linking target lib/librte_bpf.so.24.2 00:01:48.739 [653/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:48.739 [654/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:48.739 [655/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:48.739 [656/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:48.739 [657/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:48.739 [658/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:48.739 [659/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:48.739 [660/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:48.997 [661/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:48.997 [662/723] Linking target lib/librte_dispatcher.so.24.2 00:01:48.997 [663/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:48.997 [664/723] Linking target lib/librte_graph.so.24.2 00:01:48.997 [665/723] Linking target lib/librte_pdump.so.24.2 00:01:48.997 [666/723] Linking target 
lib/librte_bitratestats.so.24.2 00:01:48.997 [667/723] Linking target lib/librte_latencystats.so.24.2 00:01:48.997 [668/723] Linking target lib/librte_port.so.24.2 00:01:48.997 [669/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:48.997 [670/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:48.997 [671/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:48.997 [672/723] Linking target lib/librte_node.so.24.2 00:01:49.254 [673/723] Linking target lib/librte_table.so.24.2 00:01:49.254 [674/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:49.254 [675/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:49.255 [676/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:49.255 [677/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:49.512 [678/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:49.769 [679/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:50.027 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:50.027 [681/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:50.027 [682/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:50.027 [683/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:50.313 [684/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:50.569 [685/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:50.569 [686/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:50.569 [687/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.569 [688/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.569 [689/723] Linking static target drivers/librte_net_i40e.a 00:01:51.133 [690/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.133 [691/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:51.133 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:51.697 [693/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:51.955 [694/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:52.520 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:00.627 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:00.627 [697/723] Linking static target lib/librte_pipeline.a 00:02:00.627 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.627 [699/723] Linking static target lib/librte_vhost.a 00:02:01.194 [700/723] Linking target app/dpdk-test-dma-perf 00:02:01.194 [701/723] Linking target app/dpdk-test-cmdline 00:02:01.194 [702/723] Linking target app/dpdk-test-acl 00:02:01.194 [703/723] Linking target app/dpdk-test-regex 00:02:01.194 [704/723] Linking target app/dpdk-pdump 00:02:01.194 [705/723] Linking target app/dpdk-test-fib 00:02:01.195 [706/723] Linking target app/dpdk-test-bbdev 00:02:01.195 [707/723] Linking target app/dpdk-test-gpudev 00:02:01.195 [708/723] Linking target app/dpdk-test-security-perf 00:02:01.195 [709/723] Linking target app/dpdk-dumpcap 00:02:01.195 [710/723] Linking target 
app/dpdk-test-pipeline 00:02:01.195 [711/723] Linking target app/dpdk-test-sad 00:02:01.195 [712/723] Linking target app/dpdk-test-mldev 00:02:01.195 [713/723] Linking target app/dpdk-test-eventdev 00:02:01.195 [714/723] Linking target app/dpdk-test-flow-perf 00:02:01.195 [715/723] Linking target app/dpdk-proc-info 00:02:01.195 [716/723] Linking target app/dpdk-graph 00:02:01.195 [717/723] Linking target app/dpdk-test-crypto-perf 00:02:01.195 [718/723] Linking target app/dpdk-test-compress-perf 00:02:01.195 [719/723] Linking target app/dpdk-testpmd 00:02:01.453 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.711 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:03.086 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.086 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:03.086 02:01:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:03.086 02:01:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:03.086 02:01:30 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:03.086 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:03.086 [0/1] Installing files. 00:02:03.349 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.349 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.349 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:03.350 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.352 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:03.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:03.355 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.355 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.615 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:03.616 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:03.616 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:03.616 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:03.616 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2
00:02:03.616 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.616 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:03.880
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.881 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.882 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.882 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:03.882 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:03.882 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:03.882 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:03.882 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:03.882 Installing 
symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:03.882 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:03.882 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:03.882 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:03.882 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:03.882 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:03.882 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:03.882 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:03.882 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:03.882 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:03.882 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:03.882 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:03.882 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:03.882 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:03.882 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:03.882 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:03.882 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:03.882 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:03.882 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:03.882 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:03.882 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:03.882 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:03.882 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:03.882 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:03.882 Installing symlink pointing to librte_metrics.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:03.883 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:03.883 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:03.883 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:03.883 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:03.883 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:03.883 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:03.883 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:03.883 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:03.883 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:03.883 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:03.883 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:03.883 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:03.883 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:03.883 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:03.883 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:03.883 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:03.883 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:03.883 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:03.883 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:03.883 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:03.883 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:03.883 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:03.883 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:03.883 Installing symlink pointing to librte_efd.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:03.883 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:03.883 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:03.883 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:03.883 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:03.883 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:03.883 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:03.883 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:03.883 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:03.883 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:03.883 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:03.883 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:03.883 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:03.883 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:03.883 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:03.883 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:03.883 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:03.883 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:03.883 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:03.883 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:03.883 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:03.883 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:03.883 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:03.883 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:03.883 Installing symlink pointing to librte_power.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:03.883 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:03.883 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:03.883 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:03.883 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:03.883 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:03.883 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:03.883 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:03.883 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:03.883 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:03.883 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:03.883 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:03.883 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:03.883 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:03.883 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:03.883 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:03.883 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:03.883 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:03.883 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:03.883 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:03.883 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:03.883 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:03.883 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:03.883 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:03.883 Installing symlink pointing to librte_fib.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:03.883 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:03.883 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:03.883 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:03.884 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:03.884 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:03.884 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:03.884 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:03.884 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:03.884 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:03.884 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:03.884 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:03.884 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:03.884 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:03.884 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:03.884 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:03.884 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:03.884 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:03.884 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:03.884 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:03.884 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:03.884 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:03.884 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:03.884 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:03.884 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:03.884 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:03.884 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 
00:02:03.884 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24'
00:02:03.884 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2'
00:02:03.884 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so'
00:02:03.884 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24'
00:02:03.884 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2'
00:02:03.884 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so
00:02:03.884 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2'
00:02:03.884 02:01:31 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat
00:02:03.884 02:01:31 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:03.884
00:02:03.884 real 0m39.746s
00:02:03.884 user 13m55.846s
00:02:03.884 sys 1m59.839s
00:02:03.884 02:01:31 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:03.884 02:01:31 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:03.884 ************************************
00:02:03.884 END TEST build_native_dpdk
00:02:03.884 ************************************
00:02:03.884 02:01:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:03.884 02:01:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:03.884 02:01:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:03.884 02:01:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:03.884 02:01:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:03.884 02:01:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:03.884 02:01:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:03.884 02:01:31 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:03.884 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:04.143 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:04.143 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:04.143 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:04.401 Using 'verbs' RDMA provider
00:02:14.951 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:23.097 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:23.356 Creating mk/config.mk...done.
00:02:23.356 Creating mk/cc.flags.mk...done.
00:02:23.356 Type 'make' to build.
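For reference: the configure invocation above finds the freshly built DPDK through the .pc files installed into dpdk/build/lib/pkgconfig earlier in this log (the "DPDK libraries" and "DPDK includes" lines are derived from them), while the relink step just before it moves the PMDs under dpdk/pmds-24.2 and leaves .so -> .so.24 -> .so.24.2 compatibility symlinks behind. A minimal sketch of reproducing that pkg-config lookup by hand, assuming this job's workspace paths; whether configure issues exactly these calls is not shown in the log:

  # Point pkg-config at this job's private DPDK install (path taken from the log above)
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  # Print the compile and link flags that the installed libdpdk.pc resolves to
  pkg-config --cflags libdpdk
  pkg-config --libs libdpdk
  # A --with-shared build also needs the lib dir on the runtime search path
  export LD_LIBRARY_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib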
00:02:23.356 02:01:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:23.356 02:01:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:23.356 02:01:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:23.356 02:01:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.356 ************************************ 00:02:23.356 START TEST make 00:02:23.356 ************************************ 00:02:23.356 02:01:51 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:23.615 make[1]: Nothing to be done for 'all'. 00:02:25.002 The Meson build system 00:02:25.002 Version: 1.3.1 00:02:25.002 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:25.002 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:25.002 Build type: native build 00:02:25.002 Project name: libvfio-user 00:02:25.002 Project version: 0.0.1 00:02:25.002 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:25.002 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:25.002 Host machine cpu family: x86_64 00:02:25.002 Host machine cpu: x86_64 00:02:25.002 Run-time dependency threads found: YES 00:02:25.002 Library dl found: YES 00:02:25.002 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:25.002 Run-time dependency json-c found: YES 0.17 00:02:25.002 Run-time dependency cmocka found: YES 1.1.7 00:02:25.002 Program pytest-3 found: NO 00:02:25.002 Program flake8 found: NO 00:02:25.002 Program misspell-fixer found: NO 00:02:25.002 Program restructuredtext-lint found: NO 00:02:25.002 Program valgrind found: YES (/usr/bin/valgrind) 00:02:25.002 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.002 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.002 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.002 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:25.002 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:25.002 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:25.002 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:25.002 Build targets in project: 8 00:02:25.002 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:25.002 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:25.002 00:02:25.002 libvfio-user 0.0.1 00:02:25.002 00:02:25.002 User defined options 00:02:25.002 buildtype : debug 00:02:25.002 default_library: shared 00:02:25.002 libdir : /usr/local/lib 00:02:25.002 00:02:25.002 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.956 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:25.956 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:25.956 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:25.956 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:25.956 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:25.956 [5/37] Compiling C object samples/null.p/null.c.o 00:02:25.956 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:25.956 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:25.956 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:25.956 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:25.956 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:25.956 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:25.956 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:25.956 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:26.215 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:26.215 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:26.215 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:26.215 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:26.215 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:26.215 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:26.215 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:26.215 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:26.215 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:26.215 [23/37] Compiling C object samples/server.p/server.c.o 00:02:26.215 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:26.215 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:26.215 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:26.215 [27/37] Compiling C object samples/client.p/client.c.o 00:02:26.215 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:26.215 [29/37] Linking target samples/client 00:02:26.215 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:26.477 [31/37] Linking target test/unit_tests 00:02:26.477 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:26.477 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:26.477 [34/37] Linking target samples/server 00:02:26.477 [35/37] Linking target samples/null 00:02:26.477 [36/37] Linking target samples/gpio-pci-idio-16 00:02:26.477 [37/37] Linking target samples/lspci 00:02:26.477 INFO: autodetecting backend as ninja 00:02:26.477 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:26.737 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:27.316 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.316 ninja: no work to do. 00:02:42.190 CC lib/ut_mock/mock.o 00:02:42.190 CC lib/ut/ut.o 00:02:42.190 CC lib/log/log.o 00:02:42.190 CC lib/log/log_flags.o 00:02:42.190 CC lib/log/log_deprecated.o 00:02:42.190 LIB libspdk_log.a 00:02:42.190 LIB libspdk_ut.a 00:02:42.190 LIB libspdk_ut_mock.a 00:02:42.190 SO libspdk_ut_mock.so.6.0 00:02:42.190 SO libspdk_ut.so.2.0 00:02:42.190 SO libspdk_log.so.7.0 00:02:42.190 SYMLINK libspdk_ut_mock.so 00:02:42.190 SYMLINK libspdk_ut.so 00:02:42.190 SYMLINK libspdk_log.so 00:02:42.190 CC lib/ioat/ioat.o 00:02:42.190 CC lib/dma/dma.o 00:02:42.190 CXX lib/trace_parser/trace.o 00:02:42.190 CC lib/util/base64.o 00:02:42.190 CC lib/util/bit_array.o 00:02:42.190 CC lib/util/cpuset.o 00:02:42.190 CC lib/util/crc16.o 00:02:42.190 CC lib/util/crc32.o 00:02:42.190 CC lib/util/crc32c.o 00:02:42.190 CC lib/util/crc32_ieee.o 00:02:42.190 CC lib/util/crc64.o 00:02:42.190 CC lib/util/dif.o 00:02:42.190 CC lib/util/fd.o 00:02:42.190 CC lib/util/fd_group.o 00:02:42.190 CC lib/util/file.o 00:02:42.190 CC lib/util/hexlify.o 00:02:42.190 CC lib/util/iov.o 00:02:42.190 CC lib/util/math.o 00:02:42.190 CC lib/util/net.o 00:02:42.190 CC lib/util/pipe.o 00:02:42.190 CC lib/util/strerror_tls.o 00:02:42.190 CC lib/util/string.o 00:02:42.190 CC lib/util/uuid.o 00:02:42.190 CC lib/util/xor.o 00:02:42.190 CC lib/util/zipf.o 00:02:42.190 CC lib/vfio_user/host/vfio_user_pci.o 00:02:42.190 CC lib/vfio_user/host/vfio_user.o 00:02:42.190 LIB libspdk_dma.a 00:02:42.190 SO libspdk_dma.so.4.0 00:02:42.190 LIB libspdk_ioat.a 00:02:42.190 SYMLINK libspdk_dma.so 00:02:42.190 SO libspdk_ioat.so.7.0 00:02:42.190 SYMLINK libspdk_ioat.so 00:02:42.190 LIB libspdk_vfio_user.a 00:02:42.190 SO libspdk_vfio_user.so.5.0 00:02:42.190 SYMLINK libspdk_vfio_user.so 00:02:42.190 LIB libspdk_util.a 00:02:42.190 SO libspdk_util.so.10.0 00:02:42.190 SYMLINK libspdk_util.so 00:02:42.190 CC lib/json/json_parse.o 00:02:42.190 CC lib/idxd/idxd.o 00:02:42.190 CC lib/conf/conf.o 00:02:42.190 CC lib/rdma_provider/common.o 00:02:42.190 CC lib/idxd/idxd_user.o 00:02:42.190 CC lib/json/json_util.o 00:02:42.190 CC lib/vmd/vmd.o 00:02:42.190 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:42.190 CC lib/idxd/idxd_kernel.o 00:02:42.190 CC lib/json/json_write.o 00:02:42.190 CC lib/vmd/led.o 00:02:42.190 CC lib/rdma_utils/rdma_utils.o 00:02:42.190 CC lib/env_dpdk/env.o 00:02:42.190 CC lib/env_dpdk/memory.o 00:02:42.190 CC lib/env_dpdk/pci.o 00:02:42.190 CC lib/env_dpdk/init.o 00:02:42.190 CC lib/env_dpdk/threads.o 00:02:42.190 CC lib/env_dpdk/pci_ioat.o 00:02:42.190 CC lib/env_dpdk/pci_virtio.o 00:02:42.190 CC lib/env_dpdk/pci_vmd.o 00:02:42.190 CC lib/env_dpdk/pci_idxd.o 00:02:42.190 CC lib/env_dpdk/pci_event.o 00:02:42.190 CC lib/env_dpdk/sigbus_handler.o 00:02:42.190 CC lib/env_dpdk/pci_dpdk.o 00:02:42.190 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:42.190 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:42.190 LIB libspdk_trace_parser.a 00:02:42.190 SO libspdk_trace_parser.so.5.0 00:02:42.190 SYMLINK libspdk_trace_parser.so 00:02:42.190 LIB libspdk_conf.a 00:02:42.190 SO libspdk_conf.so.6.0 00:02:42.190 LIB libspdk_rdma_provider.a 00:02:42.190 LIB libspdk_rdma_utils.a 00:02:42.190 SO libspdk_rdma_provider.so.6.0 
00:02:42.190 SO libspdk_rdma_utils.so.1.0 00:02:42.190 SYMLINK libspdk_conf.so 00:02:42.190 LIB libspdk_json.a 00:02:42.190 SO libspdk_json.so.6.0 00:02:42.190 SYMLINK libspdk_rdma_provider.so 00:02:42.190 SYMLINK libspdk_rdma_utils.so 00:02:42.190 SYMLINK libspdk_json.so 00:02:42.190 CC lib/jsonrpc/jsonrpc_server.o 00:02:42.190 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:42.190 CC lib/jsonrpc/jsonrpc_client.o 00:02:42.190 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:42.190 LIB libspdk_idxd.a 00:02:42.190 SO libspdk_idxd.so.12.0 00:02:42.190 LIB libspdk_vmd.a 00:02:42.190 SYMLINK libspdk_idxd.so 00:02:42.190 SO libspdk_vmd.so.6.0 00:02:42.190 SYMLINK libspdk_vmd.so 00:02:42.190 LIB libspdk_jsonrpc.a 00:02:42.190 SO libspdk_jsonrpc.so.6.0 00:02:42.190 SYMLINK libspdk_jsonrpc.so 00:02:42.190 CC lib/rpc/rpc.o 00:02:42.451 LIB libspdk_rpc.a 00:02:42.710 SO libspdk_rpc.so.6.0 00:02:42.710 SYMLINK libspdk_rpc.so 00:02:42.710 LIB libspdk_env_dpdk.a 00:02:42.710 SO libspdk_env_dpdk.so.15.0 00:02:42.710 CC lib/keyring/keyring.o 00:02:42.710 CC lib/keyring/keyring_rpc.o 00:02:42.710 CC lib/trace/trace.o 00:02:42.710 CC lib/trace/trace_flags.o 00:02:42.710 CC lib/trace/trace_rpc.o 00:02:42.710 CC lib/notify/notify.o 00:02:42.710 CC lib/notify/notify_rpc.o 00:02:42.969 SYMLINK libspdk_env_dpdk.so 00:02:42.969 LIB libspdk_notify.a 00:02:42.969 SO libspdk_notify.so.6.0 00:02:42.969 LIB libspdk_keyring.a 00:02:42.969 SYMLINK libspdk_notify.so 00:02:42.969 LIB libspdk_trace.a 00:02:42.969 SO libspdk_keyring.so.1.0 00:02:42.969 SO libspdk_trace.so.10.0 00:02:43.227 SYMLINK libspdk_keyring.so 00:02:43.227 SYMLINK libspdk_trace.so 00:02:43.227 CC lib/thread/thread.o 00:02:43.227 CC lib/thread/iobuf.o 00:02:43.227 CC lib/sock/sock.o 00:02:43.227 CC lib/sock/sock_rpc.o 00:02:43.794 LIB libspdk_sock.a 00:02:43.794 SO libspdk_sock.so.10.0 00:02:43.794 SYMLINK libspdk_sock.so 00:02:44.052 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:44.052 CC lib/nvme/nvme_ctrlr.o 00:02:44.052 CC lib/nvme/nvme_fabric.o 00:02:44.052 CC lib/nvme/nvme_ns_cmd.o 00:02:44.052 CC lib/nvme/nvme_ns.o 00:02:44.052 CC lib/nvme/nvme_pcie_common.o 00:02:44.052 CC lib/nvme/nvme_pcie.o 00:02:44.052 CC lib/nvme/nvme_qpair.o 00:02:44.052 CC lib/nvme/nvme.o 00:02:44.052 CC lib/nvme/nvme_quirks.o 00:02:44.052 CC lib/nvme/nvme_transport.o 00:02:44.052 CC lib/nvme/nvme_discovery.o 00:02:44.052 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:44.052 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:44.052 CC lib/nvme/nvme_tcp.o 00:02:44.052 CC lib/nvme/nvme_opal.o 00:02:44.052 CC lib/nvme/nvme_io_msg.o 00:02:44.052 CC lib/nvme/nvme_poll_group.o 00:02:44.052 CC lib/nvme/nvme_zns.o 00:02:44.053 CC lib/nvme/nvme_stubs.o 00:02:44.053 CC lib/nvme/nvme_auth.o 00:02:44.053 CC lib/nvme/nvme_cuse.o 00:02:44.053 CC lib/nvme/nvme_vfio_user.o 00:02:44.053 CC lib/nvme/nvme_rdma.o 00:02:44.989 LIB libspdk_thread.a 00:02:44.989 SO libspdk_thread.so.10.1 00:02:44.989 SYMLINK libspdk_thread.so 00:02:45.247 CC lib/blob/blobstore.o 00:02:45.247 CC lib/accel/accel.o 00:02:45.247 CC lib/vfu_tgt/tgt_endpoint.o 00:02:45.247 CC lib/blob/request.o 00:02:45.247 CC lib/init/json_config.o 00:02:45.247 CC lib/accel/accel_rpc.o 00:02:45.247 CC lib/virtio/virtio.o 00:02:45.247 CC lib/blob/zeroes.o 00:02:45.247 CC lib/vfu_tgt/tgt_rpc.o 00:02:45.247 CC lib/accel/accel_sw.o 00:02:45.247 CC lib/init/subsystem.o 00:02:45.247 CC lib/virtio/virtio_vhost_user.o 00:02:45.247 CC lib/blob/blob_bs_dev.o 00:02:45.247 CC lib/virtio/virtio_vfio_user.o 00:02:45.247 CC lib/init/subsystem_rpc.o 00:02:45.247 CC lib/virtio/virtio_pci.o 
00:02:45.247 CC lib/init/rpc.o 00:02:45.504 LIB libspdk_init.a 00:02:45.504 SO libspdk_init.so.5.0 00:02:45.504 LIB libspdk_virtio.a 00:02:45.505 LIB libspdk_vfu_tgt.a 00:02:45.505 SYMLINK libspdk_init.so 00:02:45.505 SO libspdk_vfu_tgt.so.3.0 00:02:45.505 SO libspdk_virtio.so.7.0 00:02:45.505 SYMLINK libspdk_vfu_tgt.so 00:02:45.505 SYMLINK libspdk_virtio.so 00:02:45.762 CC lib/event/app.o 00:02:45.762 CC lib/event/reactor.o 00:02:45.762 CC lib/event/log_rpc.o 00:02:45.762 CC lib/event/app_rpc.o 00:02:45.762 CC lib/event/scheduler_static.o 00:02:46.021 LIB libspdk_event.a 00:02:46.021 SO libspdk_event.so.14.0 00:02:46.279 SYMLINK libspdk_event.so 00:02:46.279 LIB libspdk_accel.a 00:02:46.279 SO libspdk_accel.so.16.0 00:02:46.279 SYMLINK libspdk_accel.so 00:02:46.279 LIB libspdk_nvme.a 00:02:46.537 CC lib/bdev/bdev.o 00:02:46.537 CC lib/bdev/bdev_rpc.o 00:02:46.537 CC lib/bdev/bdev_zone.o 00:02:46.537 CC lib/bdev/part.o 00:02:46.537 CC lib/bdev/scsi_nvme.o 00:02:46.537 SO libspdk_nvme.so.13.1 00:02:46.796 SYMLINK libspdk_nvme.so 00:02:48.170 LIB libspdk_blob.a 00:02:48.170 SO libspdk_blob.so.11.0 00:02:48.429 SYMLINK libspdk_blob.so 00:02:48.429 CC lib/lvol/lvol.o 00:02:48.429 CC lib/blobfs/blobfs.o 00:02:48.429 CC lib/blobfs/tree.o 00:02:48.995 LIB libspdk_bdev.a 00:02:48.995 SO libspdk_bdev.so.16.0 00:02:48.995 SYMLINK libspdk_bdev.so 00:02:49.261 CC lib/scsi/dev.o 00:02:49.261 CC lib/nvmf/ctrlr.o 00:02:49.261 CC lib/nbd/nbd.o 00:02:49.261 CC lib/scsi/lun.o 00:02:49.261 CC lib/nvmf/ctrlr_discovery.o 00:02:49.261 CC lib/ftl/ftl_core.o 00:02:49.261 CC lib/ublk/ublk.o 00:02:49.261 CC lib/nbd/nbd_rpc.o 00:02:49.261 CC lib/scsi/port.o 00:02:49.261 CC lib/ublk/ublk_rpc.o 00:02:49.261 CC lib/ftl/ftl_init.o 00:02:49.261 CC lib/scsi/scsi.o 00:02:49.261 CC lib/ftl/ftl_layout.o 00:02:49.261 CC lib/nvmf/ctrlr_bdev.o 00:02:49.261 CC lib/scsi/scsi_bdev.o 00:02:49.261 CC lib/nvmf/subsystem.o 00:02:49.261 CC lib/ftl/ftl_debug.o 00:02:49.261 CC lib/nvmf/nvmf.o 00:02:49.261 CC lib/ftl/ftl_io.o 00:02:49.261 CC lib/scsi/scsi_pr.o 00:02:49.261 CC lib/scsi/scsi_rpc.o 00:02:49.261 CC lib/ftl/ftl_sb.o 00:02:49.261 CC lib/nvmf/nvmf_rpc.o 00:02:49.261 CC lib/nvmf/transport.o 00:02:49.261 CC lib/nvmf/tcp.o 00:02:49.261 CC lib/ftl/ftl_l2p.o 00:02:49.261 CC lib/scsi/task.o 00:02:49.261 CC lib/nvmf/mdns_server.o 00:02:49.261 CC lib/nvmf/stubs.o 00:02:49.261 CC lib/ftl/ftl_l2p_flat.o 00:02:49.261 CC lib/ftl/ftl_band.o 00:02:49.261 CC lib/ftl/ftl_nv_cache.o 00:02:49.261 CC lib/nvmf/vfio_user.o 00:02:49.261 CC lib/ftl/ftl_band_ops.o 00:02:49.261 CC lib/ftl/ftl_writer.o 00:02:49.261 CC lib/nvmf/rdma.o 00:02:49.261 CC lib/nvmf/auth.o 00:02:49.261 CC lib/ftl/ftl_rq.o 00:02:49.261 CC lib/ftl/ftl_reloc.o 00:02:49.261 CC lib/ftl/ftl_l2p_cache.o 00:02:49.261 CC lib/ftl/ftl_p2l.o 00:02:49.261 CC lib/ftl/mngt/ftl_mngt.o 00:02:49.261 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:49.261 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:49.261 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:49.261 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:49.261 LIB libspdk_blobfs.a 00:02:49.261 SO libspdk_blobfs.so.10.0 00:02:49.520 LIB libspdk_lvol.a 00:02:49.520 SYMLINK libspdk_blobfs.so 00:02:49.520 SO libspdk_lvol.so.10.0 00:02:49.520 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:49.520 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:49.520 SYMLINK libspdk_lvol.so 00:02:49.800 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:49.800 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:49.800 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:49.800 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:49.800 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:49.800 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:49.800 CC lib/ftl/utils/ftl_conf.o 00:02:49.800 CC lib/ftl/utils/ftl_md.o 00:02:49.800 CC lib/ftl/utils/ftl_mempool.o 00:02:49.800 CC lib/ftl/utils/ftl_bitmap.o 00:02:49.800 CC lib/ftl/utils/ftl_property.o 00:02:49.800 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:49.800 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:49.800 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:49.800 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:49.800 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:49.800 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:49.800 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:49.800 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:50.099 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:50.099 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:50.099 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:50.099 CC lib/ftl/base/ftl_base_dev.o 00:02:50.099 CC lib/ftl/base/ftl_base_bdev.o 00:02:50.099 CC lib/ftl/ftl_trace.o 00:02:50.099 LIB libspdk_nbd.a 00:02:50.099 SO libspdk_nbd.so.7.0 00:02:50.099 LIB libspdk_scsi.a 00:02:50.358 SO libspdk_scsi.so.9.0 00:02:50.358 SYMLINK libspdk_nbd.so 00:02:50.358 SYMLINK libspdk_scsi.so 00:02:50.358 LIB libspdk_ublk.a 00:02:50.358 SO libspdk_ublk.so.3.0 00:02:50.358 SYMLINK libspdk_ublk.so 00:02:50.358 CC lib/vhost/vhost.o 00:02:50.358 CC lib/iscsi/conn.o 00:02:50.358 CC lib/vhost/vhost_rpc.o 00:02:50.358 CC lib/vhost/vhost_scsi.o 00:02:50.358 CC lib/iscsi/init_grp.o 00:02:50.358 CC lib/iscsi/iscsi.o 00:02:50.358 CC lib/vhost/vhost_blk.o 00:02:50.358 CC lib/iscsi/md5.o 00:02:50.358 CC lib/vhost/rte_vhost_user.o 00:02:50.358 CC lib/iscsi/param.o 00:02:50.358 CC lib/iscsi/portal_grp.o 00:02:50.358 CC lib/iscsi/tgt_node.o 00:02:50.358 CC lib/iscsi/iscsi_subsystem.o 00:02:50.358 CC lib/iscsi/iscsi_rpc.o 00:02:50.616 CC lib/iscsi/task.o 00:02:50.616 LIB libspdk_ftl.a 00:02:50.874 SO libspdk_ftl.so.9.0 00:02:51.132 SYMLINK libspdk_ftl.so 00:02:51.697 LIB libspdk_vhost.a 00:02:51.697 SO libspdk_vhost.so.8.0 00:02:51.697 LIB libspdk_nvmf.a 00:02:51.956 SYMLINK libspdk_vhost.so 00:02:51.956 SO libspdk_nvmf.so.19.0 00:02:51.956 LIB libspdk_iscsi.a 00:02:51.956 SO libspdk_iscsi.so.8.0 00:02:52.214 SYMLINK libspdk_nvmf.so 00:02:52.214 SYMLINK libspdk_iscsi.so 00:02:52.472 CC module/vfu_device/vfu_virtio.o 00:02:52.472 CC module/vfu_device/vfu_virtio_blk.o 00:02:52.472 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.472 CC module/vfu_device/vfu_virtio_scsi.o 00:02:52.472 CC module/vfu_device/vfu_virtio_rpc.o 00:02:52.472 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:52.472 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:52.472 CC module/keyring/file/keyring.o 00:02:52.472 CC module/accel/error/accel_error.o 00:02:52.472 CC module/scheduler/gscheduler/gscheduler.o 00:02:52.472 CC module/accel/ioat/accel_ioat.o 00:02:52.472 CC module/keyring/file/keyring_rpc.o 00:02:52.472 CC module/accel/error/accel_error_rpc.o 00:02:52.472 CC module/blob/bdev/blob_bdev.o 00:02:52.472 CC module/sock/posix/posix.o 00:02:52.472 CC module/accel/ioat/accel_ioat_rpc.o 00:02:52.472 CC module/accel/dsa/accel_dsa.o 00:02:52.472 CC module/accel/iaa/accel_iaa.o 00:02:52.472 CC module/keyring/linux/keyring.o 00:02:52.472 CC module/accel/dsa/accel_dsa_rpc.o 00:02:52.472 CC module/keyring/linux/keyring_rpc.o 00:02:52.472 CC module/accel/iaa/accel_iaa_rpc.o 00:02:52.472 LIB libspdk_env_dpdk_rpc.a 00:02:52.472 SO libspdk_env_dpdk_rpc.so.6.0 00:02:52.730 SYMLINK libspdk_env_dpdk_rpc.so 00:02:52.730 LIB libspdk_keyring_linux.a 00:02:52.730 LIB libspdk_keyring_file.a 
00:02:52.730 LIB libspdk_scheduler_gscheduler.a 00:02:52.730 LIB libspdk_scheduler_dpdk_governor.a 00:02:52.730 SO libspdk_keyring_linux.so.1.0 00:02:52.730 SO libspdk_keyring_file.so.1.0 00:02:52.730 LIB libspdk_accel_error.a 00:02:52.730 SO libspdk_scheduler_gscheduler.so.4.0 00:02:52.730 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:52.730 LIB libspdk_accel_ioat.a 00:02:52.730 LIB libspdk_scheduler_dynamic.a 00:02:52.730 SO libspdk_accel_error.so.2.0 00:02:52.730 LIB libspdk_accel_iaa.a 00:02:52.730 SO libspdk_accel_ioat.so.6.0 00:02:52.730 SYMLINK libspdk_keyring_linux.so 00:02:52.730 SO libspdk_scheduler_dynamic.so.4.0 00:02:52.730 SYMLINK libspdk_keyring_file.so 00:02:52.730 SYMLINK libspdk_scheduler_gscheduler.so 00:02:52.730 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:52.730 SO libspdk_accel_iaa.so.3.0 00:02:52.730 SYMLINK libspdk_accel_error.so 00:02:52.730 LIB libspdk_accel_dsa.a 00:02:52.730 LIB libspdk_blob_bdev.a 00:02:52.730 SYMLINK libspdk_accel_ioat.so 00:02:52.730 SYMLINK libspdk_scheduler_dynamic.so 00:02:52.730 SYMLINK libspdk_accel_iaa.so 00:02:52.730 SO libspdk_accel_dsa.so.5.0 00:02:52.731 SO libspdk_blob_bdev.so.11.0 00:02:52.989 SYMLINK libspdk_blob_bdev.so 00:02:52.989 SYMLINK libspdk_accel_dsa.so 00:02:52.989 LIB libspdk_vfu_device.a 00:02:52.989 SO libspdk_vfu_device.so.3.0 00:02:53.248 CC module/bdev/gpt/gpt.o 00:02:53.248 CC module/bdev/nvme/bdev_nvme.o 00:02:53.248 CC module/bdev/delay/vbdev_delay.o 00:02:53.248 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.248 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.248 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:53.248 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.248 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.248 CC module/bdev/malloc/bdev_malloc.o 00:02:53.248 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.248 CC module/bdev/nvme/nvme_rpc.o 00:02:53.248 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:53.248 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.248 CC module/bdev/passthru/vbdev_passthru.o 00:02:53.248 CC module/bdev/nvme/bdev_mdns_client.o 00:02:53.248 CC module/bdev/lvol/vbdev_lvol.o 00:02:53.248 CC module/bdev/nvme/vbdev_opal.o 00:02:53.248 CC module/bdev/error/vbdev_error.o 00:02:53.248 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.248 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.248 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.248 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:53.248 CC module/bdev/raid/bdev_raid.o 00:02:53.248 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.248 CC module/bdev/split/vbdev_split.o 00:02:53.248 CC module/bdev/ftl/bdev_ftl.o 00:02:53.248 CC module/bdev/aio/bdev_aio.o 00:02:53.248 CC module/bdev/null/bdev_null.o 00:02:53.248 CC module/bdev/raid/bdev_raid_rpc.o 00:02:53.248 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.248 CC module/bdev/null/bdev_null_rpc.o 00:02:53.248 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.248 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.248 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.248 CC module/bdev/raid/raid0.o 00:02:53.248 CC module/bdev/raid/raid1.o 00:02:53.248 CC module/bdev/raid/concat.o 00:02:53.248 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.248 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.248 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.248 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.248 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.248 SYMLINK libspdk_vfu_device.so 00:02:53.506 LIB libspdk_sock_posix.a 00:02:53.506 SO libspdk_sock_posix.so.6.0 00:02:53.506 LIB libspdk_blobfs_bdev.a 
00:02:53.506 LIB libspdk_bdev_null.a 00:02:53.506 SO libspdk_blobfs_bdev.so.6.0 00:02:53.506 SO libspdk_bdev_null.so.6.0 00:02:53.506 SYMLINK libspdk_sock_posix.so 00:02:53.506 LIB libspdk_bdev_split.a 00:02:53.506 SYMLINK libspdk_blobfs_bdev.so 00:02:53.506 SYMLINK libspdk_bdev_null.so 00:02:53.506 SO libspdk_bdev_split.so.6.0 00:02:53.506 LIB libspdk_bdev_gpt.a 00:02:53.764 SO libspdk_bdev_gpt.so.6.0 00:02:53.764 LIB libspdk_bdev_error.a 00:02:53.764 LIB libspdk_bdev_ftl.a 00:02:53.764 SYMLINK libspdk_bdev_split.so 00:02:53.764 LIB libspdk_bdev_aio.a 00:02:53.764 SO libspdk_bdev_error.so.6.0 00:02:53.764 SO libspdk_bdev_ftl.so.6.0 00:02:53.764 LIB libspdk_bdev_passthru.a 00:02:53.764 SO libspdk_bdev_aio.so.6.0 00:02:53.764 SYMLINK libspdk_bdev_gpt.so 00:02:53.764 SO libspdk_bdev_passthru.so.6.0 00:02:53.764 LIB libspdk_bdev_zone_block.a 00:02:53.764 SYMLINK libspdk_bdev_error.so 00:02:53.764 SYMLINK libspdk_bdev_ftl.so 00:02:53.764 SYMLINK libspdk_bdev_aio.so 00:02:53.764 SO libspdk_bdev_zone_block.so.6.0 00:02:53.764 LIB libspdk_bdev_delay.a 00:02:53.764 SYMLINK libspdk_bdev_passthru.so 00:02:53.764 LIB libspdk_bdev_iscsi.a 00:02:53.764 SO libspdk_bdev_delay.so.6.0 00:02:53.764 LIB libspdk_bdev_malloc.a 00:02:53.764 SYMLINK libspdk_bdev_zone_block.so 00:02:53.764 SO libspdk_bdev_iscsi.so.6.0 00:02:53.764 SO libspdk_bdev_malloc.so.6.0 00:02:53.764 LIB libspdk_bdev_lvol.a 00:02:53.764 SYMLINK libspdk_bdev_delay.so 00:02:53.764 SYMLINK libspdk_bdev_iscsi.so 00:02:53.764 SO libspdk_bdev_lvol.so.6.0 00:02:54.023 SYMLINK libspdk_bdev_malloc.so 00:02:54.023 LIB libspdk_bdev_virtio.a 00:02:54.023 SYMLINK libspdk_bdev_lvol.so 00:02:54.023 SO libspdk_bdev_virtio.so.6.0 00:02:54.023 SYMLINK libspdk_bdev_virtio.so 00:02:54.281 LIB libspdk_bdev_raid.a 00:02:54.539 SO libspdk_bdev_raid.so.6.0 00:02:54.539 SYMLINK libspdk_bdev_raid.so 00:02:55.471 LIB libspdk_bdev_nvme.a 00:02:55.729 SO libspdk_bdev_nvme.so.7.0 00:02:55.729 SYMLINK libspdk_bdev_nvme.so 00:02:55.987 CC module/event/subsystems/sock/sock.o 00:02:55.987 CC module/event/subsystems/keyring/keyring.o 00:02:55.987 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.987 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.987 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.987 CC module/event/subsystems/vmd/vmd.o 00:02:55.987 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.987 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:55.987 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:56.245 LIB libspdk_event_keyring.a 00:02:56.245 LIB libspdk_event_vhost_blk.a 00:02:56.245 LIB libspdk_event_vfu_tgt.a 00:02:56.245 LIB libspdk_event_vmd.a 00:02:56.245 LIB libspdk_event_scheduler.a 00:02:56.245 LIB libspdk_event_sock.a 00:02:56.245 SO libspdk_event_keyring.so.1.0 00:02:56.245 SO libspdk_event_vhost_blk.so.3.0 00:02:56.245 LIB libspdk_event_iobuf.a 00:02:56.245 SO libspdk_event_vfu_tgt.so.3.0 00:02:56.245 SO libspdk_event_scheduler.so.4.0 00:02:56.245 SO libspdk_event_vmd.so.6.0 00:02:56.245 SO libspdk_event_sock.so.5.0 00:02:56.245 SO libspdk_event_iobuf.so.3.0 00:02:56.245 SYMLINK libspdk_event_keyring.so 00:02:56.245 SYMLINK libspdk_event_vhost_blk.so 00:02:56.245 SYMLINK libspdk_event_vfu_tgt.so 00:02:56.245 SYMLINK libspdk_event_scheduler.so 00:02:56.245 SYMLINK libspdk_event_sock.so 00:02:56.245 SYMLINK libspdk_event_vmd.so 00:02:56.245 SYMLINK libspdk_event_iobuf.so 00:02:56.502 CC module/event/subsystems/accel/accel.o 00:02:56.760 LIB libspdk_event_accel.a 00:02:56.760 SO libspdk_event_accel.so.6.0 00:02:56.760 SYMLINK 
libspdk_event_accel.so 00:02:57.018 CC module/event/subsystems/bdev/bdev.o 00:02:57.018 LIB libspdk_event_bdev.a 00:02:57.018 SO libspdk_event_bdev.so.6.0 00:02:57.277 SYMLINK libspdk_event_bdev.so 00:02:57.277 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.277 CC module/event/subsystems/ublk/ublk.o 00:02:57.277 CC module/event/subsystems/nbd/nbd.o 00:02:57.277 CC module/event/subsystems/scsi/scsi.o 00:02:57.277 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.534 LIB libspdk_event_nbd.a 00:02:57.534 LIB libspdk_event_ublk.a 00:02:57.534 SO libspdk_event_nbd.so.6.0 00:02:57.534 LIB libspdk_event_scsi.a 00:02:57.534 SO libspdk_event_ublk.so.3.0 00:02:57.534 SO libspdk_event_scsi.so.6.0 00:02:57.534 SYMLINK libspdk_event_nbd.so 00:02:57.534 SYMLINK libspdk_event_ublk.so 00:02:57.534 SYMLINK libspdk_event_scsi.so 00:02:57.534 LIB libspdk_event_nvmf.a 00:02:57.534 SO libspdk_event_nvmf.so.6.0 00:02:57.792 SYMLINK libspdk_event_nvmf.so 00:02:57.792 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.792 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.792 LIB libspdk_event_vhost_scsi.a 00:02:57.792 LIB libspdk_event_iscsi.a 00:02:58.050 SO libspdk_event_vhost_scsi.so.3.0 00:02:58.050 SO libspdk_event_iscsi.so.6.0 00:02:58.050 SYMLINK libspdk_event_vhost_scsi.so 00:02:58.050 SYMLINK libspdk_event_iscsi.so 00:02:58.050 SO libspdk.so.6.0 00:02:58.050 SYMLINK libspdk.so 00:02:58.315 CXX app/trace/trace.o 00:02:58.315 CC app/trace_record/trace_record.o 00:02:58.315 CC app/spdk_lspci/spdk_lspci.o 00:02:58.315 CC app/spdk_nvme_discover/discovery_aer.o 00:02:58.315 CC app/spdk_top/spdk_top.o 00:02:58.315 CC app/spdk_nvme_perf/perf.o 00:02:58.315 TEST_HEADER include/spdk/accel.h 00:02:58.315 CC test/rpc_client/rpc_client_test.o 00:02:58.315 TEST_HEADER include/spdk/accel_module.h 00:02:58.315 TEST_HEADER include/spdk/assert.h 00:02:58.315 TEST_HEADER include/spdk/barrier.h 00:02:58.315 TEST_HEADER include/spdk/base64.h 00:02:58.315 TEST_HEADER include/spdk/bdev.h 00:02:58.315 CC app/spdk_nvme_identify/identify.o 00:02:58.315 TEST_HEADER include/spdk/bdev_module.h 00:02:58.315 TEST_HEADER include/spdk/bdev_zone.h 00:02:58.315 TEST_HEADER include/spdk/bit_array.h 00:02:58.315 TEST_HEADER include/spdk/bit_pool.h 00:02:58.315 TEST_HEADER include/spdk/blob_bdev.h 00:02:58.315 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:58.315 TEST_HEADER include/spdk/blobfs.h 00:02:58.315 TEST_HEADER include/spdk/blob.h 00:02:58.315 TEST_HEADER include/spdk/conf.h 00:02:58.315 TEST_HEADER include/spdk/config.h 00:02:58.315 TEST_HEADER include/spdk/cpuset.h 00:02:58.315 TEST_HEADER include/spdk/crc16.h 00:02:58.315 TEST_HEADER include/spdk/crc32.h 00:02:58.315 TEST_HEADER include/spdk/crc64.h 00:02:58.315 TEST_HEADER include/spdk/dif.h 00:02:58.315 TEST_HEADER include/spdk/dma.h 00:02:58.315 TEST_HEADER include/spdk/endian.h 00:02:58.315 TEST_HEADER include/spdk/env_dpdk.h 00:02:58.315 TEST_HEADER include/spdk/env.h 00:02:58.315 TEST_HEADER include/spdk/event.h 00:02:58.315 TEST_HEADER include/spdk/fd_group.h 00:02:58.315 TEST_HEADER include/spdk/fd.h 00:02:58.315 TEST_HEADER include/spdk/file.h 00:02:58.315 TEST_HEADER include/spdk/ftl.h 00:02:58.315 TEST_HEADER include/spdk/gpt_spec.h 00:02:58.315 TEST_HEADER include/spdk/hexlify.h 00:02:58.316 TEST_HEADER include/spdk/histogram_data.h 00:02:58.316 TEST_HEADER include/spdk/idxd.h 00:02:58.316 TEST_HEADER include/spdk/init.h 00:02:58.316 TEST_HEADER include/spdk/idxd_spec.h 00:02:58.316 TEST_HEADER include/spdk/ioat.h 00:02:58.316 TEST_HEADER 
include/spdk/ioat_spec.h 00:02:58.316 TEST_HEADER include/spdk/iscsi_spec.h 00:02:58.316 TEST_HEADER include/spdk/json.h 00:02:58.316 TEST_HEADER include/spdk/jsonrpc.h 00:02:58.316 TEST_HEADER include/spdk/keyring.h 00:02:58.316 TEST_HEADER include/spdk/keyring_module.h 00:02:58.316 TEST_HEADER include/spdk/likely.h 00:02:58.316 TEST_HEADER include/spdk/lvol.h 00:02:58.316 TEST_HEADER include/spdk/log.h 00:02:58.316 TEST_HEADER include/spdk/memory.h 00:02:58.316 TEST_HEADER include/spdk/mmio.h 00:02:58.316 TEST_HEADER include/spdk/nbd.h 00:02:58.316 TEST_HEADER include/spdk/net.h 00:02:58.316 TEST_HEADER include/spdk/nvme.h 00:02:58.316 TEST_HEADER include/spdk/notify.h 00:02:58.316 TEST_HEADER include/spdk/nvme_intel.h 00:02:58.316 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:58.316 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:58.316 TEST_HEADER include/spdk/nvme_spec.h 00:02:58.316 TEST_HEADER include/spdk/nvme_zns.h 00:02:58.316 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:58.316 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:58.316 TEST_HEADER include/spdk/nvmf.h 00:02:58.316 TEST_HEADER include/spdk/nvmf_spec.h 00:02:58.316 TEST_HEADER include/spdk/nvmf_transport.h 00:02:58.316 TEST_HEADER include/spdk/opal.h 00:02:58.316 TEST_HEADER include/spdk/opal_spec.h 00:02:58.316 TEST_HEADER include/spdk/pci_ids.h 00:02:58.316 TEST_HEADER include/spdk/pipe.h 00:02:58.316 TEST_HEADER include/spdk/queue.h 00:02:58.316 TEST_HEADER include/spdk/reduce.h 00:02:58.316 TEST_HEADER include/spdk/rpc.h 00:02:58.316 TEST_HEADER include/spdk/scheduler.h 00:02:58.316 TEST_HEADER include/spdk/scsi.h 00:02:58.316 TEST_HEADER include/spdk/scsi_spec.h 00:02:58.316 TEST_HEADER include/spdk/sock.h 00:02:58.316 TEST_HEADER include/spdk/stdinc.h 00:02:58.316 TEST_HEADER include/spdk/string.h 00:02:58.316 TEST_HEADER include/spdk/thread.h 00:02:58.316 TEST_HEADER include/spdk/trace.h 00:02:58.316 TEST_HEADER include/spdk/trace_parser.h 00:02:58.316 TEST_HEADER include/spdk/tree.h 00:02:58.316 TEST_HEADER include/spdk/ublk.h 00:02:58.316 TEST_HEADER include/spdk/util.h 00:02:58.316 TEST_HEADER include/spdk/uuid.h 00:02:58.316 CC app/spdk_dd/spdk_dd.o 00:02:58.316 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.316 TEST_HEADER include/spdk/version.h 00:02:58.316 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.316 TEST_HEADER include/spdk/vhost.h 00:02:58.316 TEST_HEADER include/spdk/vmd.h 00:02:58.316 TEST_HEADER include/spdk/xor.h 00:02:58.316 TEST_HEADER include/spdk/zipf.h 00:02:58.316 CXX test/cpp_headers/accel.o 00:02:58.316 CXX test/cpp_headers/accel_module.o 00:02:58.316 CXX test/cpp_headers/assert.o 00:02:58.316 CXX test/cpp_headers/barrier.o 00:02:58.316 CXX test/cpp_headers/base64.o 00:02:58.316 CXX test/cpp_headers/bdev.o 00:02:58.316 CXX test/cpp_headers/bdev_module.o 00:02:58.316 CXX test/cpp_headers/bdev_zone.o 00:02:58.316 CXX test/cpp_headers/bit_array.o 00:02:58.316 CXX test/cpp_headers/bit_pool.o 00:02:58.316 CXX test/cpp_headers/blob_bdev.o 00:02:58.316 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.316 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.316 CXX test/cpp_headers/blobfs.o 00:02:58.316 CXX test/cpp_headers/blob.o 00:02:58.316 CXX test/cpp_headers/conf.o 00:02:58.316 CXX test/cpp_headers/config.o 00:02:58.316 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.316 CC app/nvmf_tgt/nvmf_main.o 00:02:58.316 CXX test/cpp_headers/crc16.o 00:02:58.316 CXX test/cpp_headers/cpuset.o 00:02:58.316 CXX test/cpp_headers/crc32.o 00:02:58.316 CC app/spdk_tgt/spdk_tgt.o 00:02:58.316 CC 
examples/ioat/verify/verify.o 00:02:58.316 CC examples/ioat/perf/perf.o 00:02:58.316 CC examples/util/zipf/zipf.o 00:02:58.316 CC test/thread/poller_perf/poller_perf.o 00:02:58.316 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.316 CC test/env/vtophys/vtophys.o 00:02:58.316 CC test/env/memory/memory_ut.o 00:02:58.316 CC test/app/histogram_perf/histogram_perf.o 00:02:58.316 CC test/app/jsoncat/jsoncat.o 00:02:58.316 CC test/env/pci/pci_ut.o 00:02:58.316 CC app/fio/nvme/fio_plugin.o 00:02:58.316 CC test/app/stub/stub.o 00:02:58.576 CC test/dma/test_dma/test_dma.o 00:02:58.576 CC test/app/bdev_svc/bdev_svc.o 00:02:58.576 CC app/fio/bdev/fio_plugin.o 00:02:58.576 LINK spdk_lspci 00:02:58.576 CC test/env/mem_callbacks/mem_callbacks.o 00:02:58.576 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:58.576 LINK rpc_client_test 00:02:58.576 LINK spdk_nvme_discover 00:02:58.839 LINK vtophys 00:02:58.839 LINK jsoncat 00:02:58.839 CXX test/cpp_headers/crc64.o 00:02:58.839 LINK interrupt_tgt 00:02:58.839 LINK zipf 00:02:58.839 LINK poller_perf 00:02:58.839 CXX test/cpp_headers/dif.o 00:02:58.839 LINK env_dpdk_post_init 00:02:58.839 CXX test/cpp_headers/dma.o 00:02:58.839 CXX test/cpp_headers/endian.o 00:02:58.839 CXX test/cpp_headers/env_dpdk.o 00:02:58.839 LINK nvmf_tgt 00:02:58.839 LINK histogram_perf 00:02:58.839 LINK spdk_trace_record 00:02:58.839 CXX test/cpp_headers/env.o 00:02:58.839 CXX test/cpp_headers/event.o 00:02:58.839 CXX test/cpp_headers/fd_group.o 00:02:58.839 CXX test/cpp_headers/fd.o 00:02:58.839 CXX test/cpp_headers/file.o 00:02:58.839 CXX test/cpp_headers/ftl.o 00:02:58.839 CXX test/cpp_headers/hexlify.o 00:02:58.839 CXX test/cpp_headers/gpt_spec.o 00:02:58.839 CXX test/cpp_headers/histogram_data.o 00:02:58.839 LINK stub 00:02:58.839 LINK iscsi_tgt 00:02:58.839 LINK ioat_perf 00:02:58.839 CXX test/cpp_headers/idxd.o 00:02:58.839 CXX test/cpp_headers/idxd_spec.o 00:02:58.839 LINK verify 00:02:58.839 LINK spdk_tgt 00:02:58.839 LINK bdev_svc 00:02:58.839 CXX test/cpp_headers/init.o 00:02:58.839 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:58.839 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:58.839 CXX test/cpp_headers/ioat.o 00:02:59.104 CXX test/cpp_headers/ioat_spec.o 00:02:59.104 CXX test/cpp_headers/iscsi_spec.o 00:02:59.104 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.104 CXX test/cpp_headers/json.o 00:02:59.104 LINK spdk_dd 00:02:59.104 CXX test/cpp_headers/jsonrpc.o 00:02:59.104 CXX test/cpp_headers/keyring.o 00:02:59.104 CXX test/cpp_headers/keyring_module.o 00:02:59.104 CXX test/cpp_headers/likely.o 00:02:59.104 CXX test/cpp_headers/log.o 00:02:59.104 LINK spdk_trace 00:02:59.104 CXX test/cpp_headers/lvol.o 00:02:59.104 CXX test/cpp_headers/memory.o 00:02:59.104 CXX test/cpp_headers/mmio.o 00:02:59.104 CXX test/cpp_headers/nbd.o 00:02:59.104 CXX test/cpp_headers/net.o 00:02:59.104 CXX test/cpp_headers/notify.o 00:02:59.104 CXX test/cpp_headers/nvme.o 00:02:59.104 CXX test/cpp_headers/nvme_intel.o 00:02:59.104 CXX test/cpp_headers/nvme_ocssd.o 00:02:59.104 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:59.104 LINK pci_ut 00:02:59.104 CXX test/cpp_headers/nvme_spec.o 00:02:59.366 CXX test/cpp_headers/nvme_zns.o 00:02:59.366 CXX test/cpp_headers/nvmf_cmd.o 00:02:59.366 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:59.366 CXX test/cpp_headers/nvmf.o 00:02:59.366 LINK test_dma 00:02:59.366 CXX test/cpp_headers/nvmf_spec.o 00:02:59.366 CXX test/cpp_headers/nvmf_transport.o 00:02:59.366 CXX test/cpp_headers/opal.o 00:02:59.366 CXX test/cpp_headers/opal_spec.o 
00:02:59.366 CC test/event/event_perf/event_perf.o 00:02:59.366 CC test/event/reactor/reactor.o 00:02:59.366 CC test/event/reactor_perf/reactor_perf.o 00:02:59.366 CXX test/cpp_headers/pci_ids.o 00:02:59.366 CXX test/cpp_headers/pipe.o 00:02:59.366 CC test/event/app_repeat/app_repeat.o 00:02:59.366 CC examples/sock/hello_world/hello_sock.o 00:02:59.366 CXX test/cpp_headers/queue.o 00:02:59.366 CXX test/cpp_headers/reduce.o 00:02:59.627 LINK spdk_bdev 00:02:59.627 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.627 CXX test/cpp_headers/rpc.o 00:02:59.627 LINK spdk_nvme 00:02:59.627 CC examples/idxd/perf/perf.o 00:02:59.627 LINK nvme_fuzz 00:02:59.627 CC examples/thread/thread/thread_ex.o 00:02:59.627 CXX test/cpp_headers/scheduler.o 00:02:59.627 CXX test/cpp_headers/scsi.o 00:02:59.627 CXX test/cpp_headers/scsi_spec.o 00:02:59.627 CXX test/cpp_headers/sock.o 00:02:59.627 CXX test/cpp_headers/stdinc.o 00:02:59.627 CC test/event/scheduler/scheduler.o 00:02:59.627 CXX test/cpp_headers/string.o 00:02:59.627 CXX test/cpp_headers/thread.o 00:02:59.627 CXX test/cpp_headers/trace.o 00:02:59.627 CXX test/cpp_headers/trace_parser.o 00:02:59.627 CC examples/vmd/led/led.o 00:02:59.627 CXX test/cpp_headers/tree.o 00:02:59.627 CXX test/cpp_headers/ublk.o 00:02:59.627 CXX test/cpp_headers/util.o 00:02:59.627 CXX test/cpp_headers/uuid.o 00:02:59.627 CXX test/cpp_headers/version.o 00:02:59.627 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.627 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.627 CXX test/cpp_headers/vhost.o 00:02:59.627 CXX test/cpp_headers/vmd.o 00:02:59.627 LINK reactor 00:02:59.627 CXX test/cpp_headers/xor.o 00:02:59.627 LINK event_perf 00:02:59.627 CXX test/cpp_headers/zipf.o 00:02:59.627 LINK reactor_perf 00:02:59.893 LINK lsvmd 00:02:59.893 LINK mem_callbacks 00:02:59.893 CC app/vhost/vhost.o 00:02:59.893 LINK app_repeat 00:02:59.893 LINK spdk_nvme_perf 00:02:59.893 LINK spdk_nvme_identify 00:02:59.893 LINK vhost_fuzz 00:02:59.893 LINK spdk_top 00:02:59.893 LINK led 00:02:59.893 LINK hello_sock 00:02:59.893 CC test/nvme/reset/reset.o 00:02:59.893 CC test/nvme/sgl/sgl.o 00:02:59.893 CC test/nvme/startup/startup.o 00:02:59.893 CC test/nvme/aer/aer.o 00:02:59.893 CC test/nvme/e2edp/nvme_dp.o 00:02:59.893 CC test/nvme/reserve/reserve.o 00:02:59.893 CC test/nvme/overhead/overhead.o 00:03:00.153 CC test/nvme/err_injection/err_injection.o 00:03:00.153 LINK thread 00:03:00.153 CC test/nvme/simple_copy/simple_copy.o 00:03:00.153 CC test/nvme/connect_stress/connect_stress.o 00:03:00.153 LINK scheduler 00:03:00.153 CC test/accel/dif/dif.o 00:03:00.153 CC test/nvme/boot_partition/boot_partition.o 00:03:00.153 CC test/blobfs/mkfs/mkfs.o 00:03:00.153 CC test/nvme/compliance/nvme_compliance.o 00:03:00.153 CC test/lvol/esnap/esnap.o 00:03:00.153 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.153 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.153 CC test/nvme/cuse/cuse.o 00:03:00.153 CC test/nvme/fdp/fdp.o 00:03:00.153 LINK idxd_perf 00:03:00.153 LINK vhost 00:03:00.153 LINK startup 00:03:00.412 LINK reserve 00:03:00.412 LINK boot_partition 00:03:00.412 LINK err_injection 00:03:00.412 LINK doorbell_aers 00:03:00.412 LINK connect_stress 00:03:00.412 LINK mkfs 00:03:00.412 LINK sgl 00:03:00.412 LINK simple_copy 00:03:00.412 LINK overhead 00:03:00.412 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.412 CC examples/nvme/reconnect/reconnect.o 00:03:00.412 CC examples/nvme/hotplug/hotplug.o 00:03:00.412 LINK memory_ut 00:03:00.412 CC examples/nvme/hello_world/hello_world.o 00:03:00.412 CC 
examples/nvme/arbitration/arbitration.o 00:03:00.412 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.412 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.412 CC examples/nvme/abort/abort.o 00:03:00.412 LINK nvme_compliance 00:03:00.412 LINK reset 00:03:00.412 LINK fused_ordering 00:03:00.412 LINK nvme_dp 00:03:00.412 LINK fdp 00:03:00.671 LINK aer 00:03:00.671 LINK dif 00:03:00.671 CC examples/accel/perf/accel_perf.o 00:03:00.671 LINK cmb_copy 00:03:00.671 CC examples/blob/hello_world/hello_blob.o 00:03:00.671 LINK pmr_persistence 00:03:00.671 CC examples/blob/cli/blobcli.o 00:03:00.671 LINK hello_world 00:03:00.671 LINK hotplug 00:03:00.929 LINK reconnect 00:03:00.929 LINK arbitration 00:03:00.929 LINK hello_blob 00:03:00.929 LINK abort 00:03:00.929 CC test/bdev/bdevio/bdevio.o 00:03:01.189 LINK nvme_manage 00:03:01.189 LINK accel_perf 00:03:01.189 LINK blobcli 00:03:01.189 LINK iscsi_fuzz 00:03:01.447 LINK bdevio 00:03:01.447 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.447 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.705 LINK cuse 00:03:01.705 LINK hello_bdev 00:03:02.272 LINK bdevperf 00:03:02.838 CC examples/nvmf/nvmf/nvmf.o 00:03:03.096 LINK nvmf 00:03:05.625 LINK esnap 00:03:05.625 00:03:05.625 real 0m42.338s 00:03:05.625 user 7m25.177s 00:03:05.625 sys 1m48.139s 00:03:05.625 02:02:33 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:05.625 02:02:33 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.625 ************************************ 00:03:05.625 END TEST make 00:03:05.625 ************************************ 00:03:05.625 02:02:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.625 02:02:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.625 02:02:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.625 02:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.625 02:02:33 -- pm/common@44 -- $ pid=796986 00:03:05.625 02:02:33 -- pm/common@50 -- $ kill -TERM 796986 00:03:05.625 02:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.625 02:02:33 -- pm/common@44 -- $ pid=796988 00:03:05.625 02:02:33 -- pm/common@50 -- $ kill -TERM 796988 00:03:05.625 02:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:05.625 02:02:33 -- pm/common@44 -- $ pid=796990 00:03:05.625 02:02:33 -- pm/common@50 -- $ kill -TERM 796990 00:03:05.625 02:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.625 02:02:33 -- pm/common@44 -- $ pid=797017 00:03:05.625 02:02:33 -- pm/common@50 -- $ sudo -E kill -TERM 797017 00:03:05.625 02:02:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.625 02:02:33 -- nvmf/common.sh@7 -- # uname -s 00:03:05.625 02:02:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.625 02:02:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.625 02:02:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:03:05.625 02:02:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.625 02:02:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.625 02:02:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.625 02:02:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.625 02:02:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.625 02:02:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.625 02:02:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.625 02:02:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:05.625 02:02:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:05.625 02:02:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.625 02:02:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.625 02:02:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:05.625 02:02:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.625 02:02:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:05.625 02:02:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.625 02:02:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.625 02:02:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.625 02:02:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.625 02:02:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.625 02:02:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.625 02:02:33 -- paths/export.sh@5 -- # export PATH 00:03:05.625 02:02:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.625 02:02:33 -- nvmf/common.sh@47 -- # : 0 00:03:05.625 02:02:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:05.625 02:02:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:05.625 02:02:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.625 02:02:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.625 02:02:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.625 02:02:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:05.625 02:02:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:05.625 02:02:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:05.625 02:02:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.625 02:02:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.625 02:02:33 -- 
spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.625 02:02:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.625 02:02:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.625 02:02:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.625 02:02:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.625 02:02:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.625 02:02:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.625 02:02:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.625 02:02:33 -- spdk/autotest.sh@48 -- # udevadm_pid=868362 00:03:05.625 02:02:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.625 02:02:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.625 02:02:33 -- pm/common@17 -- # local monitor 00:03:05.625 02:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@21 -- # date +%s 00:03:05.625 02:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.625 02:02:33 -- pm/common@21 -- # date +%s 00:03:05.625 02:02:33 -- pm/common@25 -- # sleep 1 00:03:05.625 02:02:33 -- pm/common@21 -- # date +%s 00:03:05.625 02:02:33 -- pm/common@21 -- # date +%s 00:03:05.625 02:02:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722038553 00:03:05.625 02:02:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722038553 00:03:05.625 02:02:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722038553 00:03:05.625 02:02:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722038553 00:03:05.886 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722038553_collect-vmstat.pm.log 00:03:05.886 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722038553_collect-cpu-load.pm.log 00:03:05.886 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722038553_collect-cpu-temp.pm.log 00:03:05.886 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722038553_collect-bmc-pm.bmc.pm.log 00:03:06.826 02:02:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.826 02:02:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.826 02:02:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:06.826 02:02:34 -- common/autotest_common.sh@10 -- # set +x 00:03:06.826 02:02:34 -- spdk/autotest.sh@59 -- # 
create_test_list 00:03:06.826 02:02:34 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:06.826 02:02:34 -- common/autotest_common.sh@10 -- # set +x 00:03:06.826 02:02:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:06.826 02:02:34 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.826 02:02:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.826 02:02:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:06.826 02:02:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.826 02:02:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.826 02:02:34 -- common/autotest_common.sh@1455 -- # uname 00:03:06.826 02:02:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:06.826 02:02:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.826 02:02:34 -- common/autotest_common.sh@1475 -- # uname 00:03:06.826 02:02:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:06.827 02:02:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:06.827 02:02:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:06.827 02:02:34 -- spdk/autotest.sh@72 -- # hash lcov 00:03:06.827 02:02:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:06.827 02:02:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:06.827 --rc lcov_branch_coverage=1 00:03:06.827 --rc lcov_function_coverage=1 00:03:06.827 --rc genhtml_branch_coverage=1 00:03:06.827 --rc genhtml_function_coverage=1 00:03:06.827 --rc genhtml_legend=1 00:03:06.827 --rc geninfo_all_blocks=1 00:03:06.827 ' 00:03:06.827 02:02:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:06.827 --rc lcov_branch_coverage=1 00:03:06.827 --rc lcov_function_coverage=1 00:03:06.827 --rc genhtml_branch_coverage=1 00:03:06.827 --rc genhtml_function_coverage=1 00:03:06.827 --rc genhtml_legend=1 00:03:06.827 --rc geninfo_all_blocks=1 00:03:06.827 ' 00:03:06.827 02:02:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:06.827 --rc lcov_branch_coverage=1 00:03:06.827 --rc lcov_function_coverage=1 00:03:06.827 --rc genhtml_branch_coverage=1 00:03:06.827 --rc genhtml_function_coverage=1 00:03:06.827 --rc genhtml_legend=1 00:03:06.827 --rc geninfo_all_blocks=1 00:03:06.827 --no-external' 00:03:06.827 02:02:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:06.827 --rc lcov_branch_coverage=1 00:03:06.827 --rc lcov_function_coverage=1 00:03:06.827 --rc genhtml_branch_coverage=1 00:03:06.827 --rc genhtml_function_coverage=1 00:03:06.827 --rc genhtml_legend=1 00:03:06.827 --rc geninfo_all_blocks=1 00:03:06.827 --no-external' 00:03:06.827 02:02:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:06.827 lcov: LCOV version 1.14 00:03:06.827 02:02:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:08.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:08.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:08.733 
00:03:41.711 02:03:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:41.711 02:03:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:41.711 02:03:08 -- common/autotest_common.sh@10 -- # set +x 00:03:41.711 02:03:08 -- spdk/autotest.sh@91 -- # rm -f 00:03:41.711 02:03:08 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.711 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:41.711 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:41.711 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:41.711 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:41.711 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:41.711 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:41.711 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:41.711 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:41.711 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:41.711 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:41.711 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:41.711 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:41.711 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:41.711 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:41.711 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:41.711 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:41.711 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:41.711 02:03:09 -- spdk/autotest.sh@96 -- # get_zoned_devs
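The pre-cleanup pass traced next first scans sysfs for zoned namespaces, then probes each non-zoned /dev/nvme*n* for a partition table and stamps the first MiB if the device is free. A condensed sketch of what the following trace steps through, using the helper names visible in this log (return-code handling simplified; paths illustrative):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # zoned_devs maps block-device name -> zoned flag, per /sys/block/*/queue/zoned
  get_zoned_devs() {
      local -gA zoned_devs=()
      local nvme
      for nvme in /sys/block/nvme*; do
          # the trace's is_block_zoned check: anything other than "none" is zoned
          [[ -e $nvme/queue/zoned && $(<$nvme/queue/zoned) != none ]] \
              && zoned_devs[${nvme##*/}]=1
      done
  }
  get_zoned_devs
  for dev in /dev/nvme*n1; do
      [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue   # never write to zoned devices
      # probe for partition data; this run prints "No valid GPT data, bailing"
      "$spdk/scripts/spdk-gpt.py" "$dev" || true
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      # an empty PTTYPE means the namespace is not in use, so stamp it clean
      [[ -z $pt ]] && dd if=/dev/zero of="$dev" bs=1M count=1
  done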
00:03:41.711 02:03:09 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.711 02:03:09 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.711 02:03:09 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.711 02:03:09 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.711 02:03:09 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.711 02:03:09 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.711 02:03:09 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.711 02:03:09 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.711 02:03:09 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:41.711 02:03:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.711 02:03:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.711 02:03:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:41.711 02:03:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:41.711 02:03:09 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:41.970 No valid GPT data, bailing 00:03:41.970 02:03:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.970 02:03:09 -- scripts/common.sh@391 -- # pt= 00:03:41.970 02:03:09 -- scripts/common.sh@392 -- # return 1 00:03:41.970 02:03:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:41.970 1+0 records in 00:03:41.970 1+0 records out 00:03:41.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00249518 s, 420 MB/s 00:03:41.970 02:03:09 -- spdk/autotest.sh@118 -- # sync 00:03:41.970 02:03:09 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:41.970 02:03:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:41.970 02:03:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.871 02:03:11 -- spdk/autotest.sh@124 -- # uname -s 00:03:43.871 02:03:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:43.871 02:03:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.871 02:03:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.871 02:03:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.871 02:03:11 -- common/autotest_common.sh@10 -- # set +x 00:03:43.871 ************************************ 00:03:43.871 START TEST setup.sh 00:03:43.871 ************************************ 00:03:43.871 02:03:11 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.871 * Looking for test storage... 
00:03:43.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.871 02:03:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:43.871 02:03:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:43.871 02:03:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.871 02:03:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.871 02:03:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.871 02:03:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.871 ************************************ 00:03:43.871 START TEST acl 00:03:43.871 ************************************ 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.871 * Looking for test storage... 00:03:43.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.871 02:03:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.871 02:03:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.871 02:03:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:43.871 02:03:11 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:43.871 02:03:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:43.871 02:03:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:43.871 02:03:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:43.871 02:03:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.871 02:03:11 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.778 02:03:13 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:45.778 02:03:13 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:45.778 02:03:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.778 02:03:13 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:45.778 02:03:13 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.778 02:03:13 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:46.712 Hugepages 00:03:46.712 node hugesize free / total 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.712 00 00:03:46.712 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.712 02:03:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.712 [each of the sixteen I/OAT DMA engines -- 0000:00:04.0-7 and 0000:80:04.0-7 -- matches the *:*:*.* BDF pattern, fails [[ ioatdma == nvme ]], and is skipped with continue; the sixteen identical trace blocks between 00:03:46.712 and 00:03:46.713 are elided]
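The scan that resumes below picks up the lone NVMe controller. Taken as a whole, the loop being traced is only a few lines of shell; a condensed paraphrase of setup/acl.sh@16-@24 as it appears in this trace (the setup path and array initialization are spelled out here for self-containment):

  setup=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  devs=(); declare -A drivers=()
  # parse each "setup.sh status" row into (BDF, driver); skip rows without a
  # BDF and keep only NVMe controllers not named in PCI_BLOCKED
  while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue               # hugepage rows ("1048576kB") have no BDF
      [[ $driver == nvme ]] || continue               # the ioatdma channels drop out here
      [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue  # honor the block list
      devs+=("$dev"); drivers["$dev"]=$driver
  done < <("$setup" status)
  # on this machine the loop ends with devs=(0000:88:00.0)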
00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:46.713 02:03:14 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:46.713 02:03:14 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.713 02:03:14 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.713 02:03:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:46.713 ************************************ 00:03:46.713 START TEST denied 00:03:46.713 ************************************ 00:03:46.713 02:03:14 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:46.713 02:03:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:46.713 02:03:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output
config 00:03:46.713 02:03:14 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:46.713 02:03:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.713 02:03:14 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.091 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.091 02:03:16 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.670 00:03:50.670 real 0m3.915s 00:03:50.670 user 0m1.149s 00:03:50.670 sys 0m1.846s 00:03:50.670 02:03:18 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.670 02:03:18 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:50.670 ************************************ 00:03:50.670 END TEST denied 00:03:50.670 ************************************ 00:03:50.670 02:03:18 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:50.670 02:03:18 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.670 02:03:18 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.670 02:03:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.670 ************************************ 00:03:50.670 START TEST allowed 00:03:50.670 ************************************ 00:03:50.670 02:03:18 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:50.670 02:03:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:50.670 02:03:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:50.670 02:03:18 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:50.670 02:03:18 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.670 02:03:18 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.206 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.206 02:03:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:53.206 02:03:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:53.206 02:03:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:53.206 02:03:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.206 02:03:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.583 00:03:54.583 real 0m3.900s 00:03:54.583 user 0m1.028s 00:03:54.583 sys 0m1.691s 00:03:54.583 02:03:22 
setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.583 02:03:22 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:54.583 ************************************ 00:03:54.583 END TEST allowed 00:03:54.583 ************************************ 00:03:54.583 00:03:54.583 real 0m10.662s 00:03:54.583 user 0m3.278s 00:03:54.583 sys 0m5.349s 00:03:54.583 02:03:22 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.583 02:03:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.583 ************************************ 00:03:54.583 END TEST acl 00:03:54.583 ************************************ 00:03:54.583 02:03:22 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:54.583 02:03:22 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.583 02:03:22 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.583 02:03:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:54.583 ************************************ 00:03:54.583 START TEST hugepages 00:03:54.583 ************************************ 00:03:54.583 02:03:22 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:54.583 * Looking for test storage... 00:03:54.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.583 02:03:22 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41168964 kB' 'MemAvailable: 45703388 kB' 'Buffers: 2704 kB' 'Cached: 12756920 kB' 'SwapCached: 0 kB' 'Active: 8751672 kB' 'Inactive: 4516068 kB' 'Active(anon): 8356048 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 511444 kB' 'Mapped: 206864 kB' 'Shmem: 7847932 kB' 'KReclaimable: 233140 kB' 'Slab: 612432 kB' 'SReclaimable: 233140 kB' 'SUnreclaim: 379292 kB' 'KernelStack: 12768 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 9438056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:54.583 [get_meminfo now walks this snapshot line by line: for every field before Hugepagesize the trace repeats [[ var == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue / IFS=': ' / read -r var val _ at 02:03:22; the several dozen identical iterations spanning 00:03:54.583-00:03:54.585, through the HugePages_* rows, are elided]
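Functionally, the field-by-field walk elided above implements a small lookup: find one key in a meminfo snapshot and print its numeric value. A behavioral stand-in for it -- the real helper is the pure-bash loop being traced in setup/common.sh, which also handles per-node meminfo files and strips their "Node N " prefixes; the awk form here is a simplified sketch, not the script's own code:

  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      # when a node is given and has its own meminfo, read that file instead
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      awk -F': *' -v key="$get" '$1 == key { sub(/ kB$/, "", $2); print $2; exit }' "$mem_f"
  }
  get_meminfo Hugepagesize   # -> 2048 on this box, per the snapshot above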
02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.584 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.585 02:03:22 setup.sh.hugepages -- 
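For reference, the key-by-key scan recorded above reads as the following minimal bash sketch -- an illustrative reconstruction of what the get_meminfo trace shows (print the value of the first matching /proc/meminfo key, e.g. Hugepagesize -> 2048), not the verbatim setup/common.sh source:

  # sketch: print the value of one /proc/meminfo key, as traced above
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # non-matching keys fall through, mirroring the 'continue' lines above
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  # e.g. on this runner: get_meminfo_sketch Hugepagesize  ->  2048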
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:54.585 02:03:22 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:54.585 02:03:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.585 02:03:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.585 02:03:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.843 ************************************ 00:03:54.843 START TEST default_setup 00:03:54.843 ************************************ 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.843 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.844 02:03:22 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.844 02:03:22 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.777 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.777 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.777 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.777 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.777 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.777 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.777 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.035 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.035 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.972 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- 
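The get_test_nr_hugepages 2097152 0 call traced above reduces to simple arithmetic: the requested 2097152 kB divided by the 2048 kB default hugepage size gives the 1024 pages assigned to node 0 (nodes_test[0]=1024). A minimal sketch of that computation, with names following the trace:

  # sketch: per-node hugepage count for a 2 GiB request with 2 MiB pages
  size_kb=2097152            # requested size from the trace
  default_hugepages_kb=2048  # Hugepagesize read from /proc/meminfo earlier
  nr_hugepages=$(( size_kb / default_hugepages_kb ))
  echo "node 0: $nr_hugepages hugepages"   # -> node 0: 1024 hugepages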
setup/common.sh@31 -- # read -r var val _ 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43164884 kB' 'MemAvailable: 47699252 kB' 'Buffers: 2704 kB' 'Cached: 12757020 kB' 'SwapCached: 0 kB' 'Active: 8770912 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375288 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530480 kB' 'Mapped: 206996 kB' 'Shmem: 7848032 kB' 'KReclaimable: 233028 kB' 'Slab: 612140 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379112 kB' 'KernelStack: 12752 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.972 02:03:25 setup.sh.hugepages.default_setup -- (meminfo scan continues key by key: Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted -- none equals AnonHugePages, so each iteration takes continue) -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.973 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.974 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43175968 kB' 'MemAvailable: 47710336 kB' 'Buffers: 2704 kB' 'Cached: 12757024 kB' 'SwapCached: 0 kB' 'Active: 8770684 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375060 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530252 kB' 'Mapped: 206976 kB' 'Shmem: 7848036 kB' 'KReclaimable: 233028 kB' 'Slab: 612116 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379088 kB' 'KernelStack: 12768 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:56.974 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.974 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.974 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.974 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.974 02:03:25 
setup.sh.hugepages.default_setup -- (meminfo scan continues key by key: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages -- none equals HugePages_Surp, so each iteration takes continue) 00:03:56.975 02:03:25
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.975 02:03:25 
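At this point verify_nr_hugepages has read anon=0 (AnonHugePages) and surp=0 (HugePages_Surp), and is about to read HugePages_Rsvd the same way. As a rough sketch of the bookkeeping being assembled -- using the get_meminfo_sketch helper from the earlier sketch; the exact comparison below is an assumption, not the verbatim script -- the gathered values let the test confirm the pool matches the 1024-page request:

  # sketch (assumed check, not verbatim): compare the pool with the request
  anon=0; surp=0                                # values already traced above
  total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the meminfo dumps
  free=$(get_meminfo_sketch HugePages_Free)     # 1024 in the meminfo dumps
  echo "total=$total free=$free anon=$anon surp=$surp"
  (( total - surp == 1024 )) && echo "hugepage pool matches the 1024-page request"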
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.975 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43176488 kB' 'MemAvailable: 47710856 kB' 'Buffers: 2704 kB' 'Cached: 12757036 kB' 'SwapCached: 0 kB' 'Active: 8770604 kB' 'Inactive: 4516068 kB' 'Active(anon): 8374980 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530192 kB' 'Mapped: 206976 kB' 'Shmem: 7848048 kB' 'KReclaimable: 233028 kB' 'Slab: 612184 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379156 kB' 'KernelStack: 12752 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- (meminfo scan continues key by key: Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback -- none equals HugePages_Rsvd, so each iteration takes continue) 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.976 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.977 nr_hugepages=1024 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.977 resv_hugepages=0 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.977 surplus_hugepages=0 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.977 anon_hugepages=0 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.978 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
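The trace above is get_meminfo (test/setup/common.sh) scanning /proc/meminfo: mapfile pulls the file into an array, any "Node N " prefix is stripped, each line is split on IFS=': ', and the loop continues until the requested key (HugePages_Rsvd here) matches, at which point its value is echoed. A minimal standalone sketch of the same lookup, assuming a standard Linux /proc/meminfo; the function name meminfo_value is hypothetical, not SPDK's API:

    #!/usr/bin/env bash
    # Sketch of the lookup traced above: find one key in /proc/meminfo
    # (or a per-node meminfo file) and print its value.
    shopt -s extglob                     # for the +([0-9]) pattern below

    meminfo_value() {
        local get=$1 node=${2:-}         # key to fetch, optional NUMA node id
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }  # per-node files prefix keys with "Node N "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"              # e.g. 0 for HugePages_Rsvd above
                return 0
            fi
        done <"$mem_f"
        return 1                         # key not present
    }

With resv=0 from this lookup, the (( 1024 == nr_hugepages + surp + resv )) check just above confirms the accounting: 1024 pages allocated, none surplus, none reserved.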
00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:56.977 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:56.978 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.238 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43176488 kB' 'MemAvailable: 47710856 kB' 'Buffers: 2704 kB' 'Cached: 12757064 kB' 'SwapCached: 0 kB' 'Active: 8770684 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375060 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530264 kB' 'Mapped: 206976 kB' 'Shmem: 7848076 kB' 'KReclaimable: 233028 kB' 'Slab: 612184 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379156 kB' 'KernelStack: 12784 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[... xtrace condensed (00:03:57.238-00:03:57.240): setup/common.sh@32 tests each meminfo key from MemTotal through Unaccepted against HugePages_Total; every one misses, hits @32 continue, and loops back through @31 IFS=': ' / read -r var val _ ...]
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
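get_nodes above walks /sys/devices/system/node/node<N> with an extglob pattern and records the per-node hugepage counts it finds (1024 on node0, 0 on node1, so no_nodes=2). A short sketch of that enumeration, assuming the standard sysfs node layout; the array name expected_pages is illustrative (the script itself fills nodes_sys and nodes_test):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the way the trace above does and seed a
    # per-node hugepage expectation, with all 1024 pages on node 0.
    shopt -s extglob nullglob            # extglob for +([0-9]), nullglob for no-match

    declare -a expected_pages
    for node in /sys/devices/system/node/node+([0-9]); do
        expected_pages[${node##*node}]=0 # ${node##*node} leaves just the node id
    done
    expected_pages[0]=1024               # default_setup places every page on node 0

    echo "found ${#expected_pages[@]} NUMA node(s)"
    for id in "${!expected_pages[@]}"; do
        echo "node$id: expecting ${expected_pages[$id]} hugepages"
    done

The loop that follows does exactly this per node: it re-reads HugePages_Surp from /sys/devices/system/node/node0/meminfo and folds it into the expected count.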
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.240 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20189940 kB' 'MemUsed: 12687000 kB' 'SwapCached: 0 kB' 'Active: 5908628 kB' 'Inactive: 3429452 kB' 'Active(anon): 5636696 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3429452 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9260464 kB' 'Mapped: 81568 kB' 'AnonPages: 80800 kB' 'Shmem: 5559080 kB' 'KernelStack: 7080 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98568 kB' 'Slab: 321928 kB' 'SReclaimable: 98568 kB' 'SUnreclaim: 223360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace condensed (00:03:57.240-00:03:57.241): setup/common.sh@32 tests each node0 meminfo key from MemTotal through HugePages_Free against HugePages_Surp; every one misses, hits @32 continue, and loops back through @31 IFS=': ' / read -r var val _ ...]
00:03:57.241 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.241 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.241 02:03:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.241 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.241 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.241 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.241 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.242 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:57.242 node0=1024 expecting 1024
00:03:57.242 02:03:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:57.242
00:03:57.242 real	0m2.410s
00:03:57.242 user	0m0.677s
00:03:57.242 sys	0m0.838s
00:03:57.242 02:03:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:57.242 02:03:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:57.242 ************************************
00:03:57.242 END TEST default_setup
00:03:57.242 ************************************
00:03:57.242 02:03:25 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:57.242 02:03:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:57.242 02:03:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:57.242 02:03:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.242 ************************************
00:03:57.242 START TEST per_node_1G_alloc
00:03:57.242 ************************************
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.242 02:03:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:58.178 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:58.179 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:58.179 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:58.179 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:58.179 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:58.179 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:58.179 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:58.179 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:58.179 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:58.179 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:58.179 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:58.179 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:58.179 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:58.179 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:58.179 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:58.179 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:58.179 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
using the vfio-pci driver 00:03:58.179 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.179 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.179 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.179 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.179 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.179 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.179 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:58.179 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.179 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.179 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.179 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43194376 kB' 'MemAvailable: 47728744 kB' 'Buffers: 2704 kB' 'Cached: 12757140 kB' 'SwapCached: 0 kB' 'Active: 8771368 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375744 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530852 kB' 'Mapped: 207084 kB' 'Shmem: 7848152 kB' 'KReclaimable: 233028 kB' 'Slab: 612124 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379096 kB' 'KernelStack: 12784 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
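
The scan running through this stretch of the trace is setup/common.sh's get_meminfo helper: the meminfo file is read once, then every "Key: value" line is split with IFS=': ' and tested against the requested key (AnonHugePages here, HugePages_Surp and HugePages_Rsvd in the neighbouring lookups), with `continue` skipping each non-matching field until the match echoes its value and returns. A minimal reconstruction of that helper, assembled from the @16-@33 calls visible in the trace; the real setup/common.sh may differ in detail:

  # Sketch of get_meminfo as reconstructed from the xtrace above.
  # $1 = field to look up, $2 = optional NUMA node (names follow the log).
  shopt -s extglob                      # needed for the "Node +([0-9]) " strip
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f=/proc/meminfo
      # With a node argument the per-node sysfs file is used; with none,
      # the test fails (node$node expands to plain "node") and /proc/meminfo
      # is kept, matching the [[ -e .../node/node/meminfo ]] seen at @23.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes each line with "Node N "; strip it (@29).
      mem=("${mem[@]#Node +([0-9]) }")
      # Split "Key: value kB" and return the first matching value (@31-@33).
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Each lookup walks the whole key list, which is why a single `get_meminfo AnonHugePages` call expands to one [[ ... ]]/continue pair per meminfo field in the trace.

The sizing for this per_node_1G_alloc run was fixed earlier in the trace (hugepages.sh@49-@73): the 1048576 kB request divided by the 2048 kB default huge page size gives 512 pages, applied to each of nodes 0 and 1, so verify_nr_hugepages later expects HugePages_Total: 1024. The same arithmetic as a sketch (values taken from the log; the relative setup.sh path is illustrative):

  size_kb=1048576                        # requested size, 1G in kB (@49)
  default_hugepagesize_kb=2048           # Hugepagesize from /proc/meminfo
  nr_hugepages=$((size_kb / default_hugepagesize_kb))   # 512 (@57)
  nodes_test=()
  for node in 0 1; do                    # user_nodes=('0' '1') (@62)
      nodes_test[node]=$nr_hugepages     # 512 pages on each node (@71)
  done
  NRHUGE=$nr_hugepages HUGENODE=0,1 ./scripts/setup.sh   # (@146)
  # 512 pages x 2 nodes -> HugePages_Total: 1024 in the meminfo dumps above.
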
00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.445 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.446 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43194392 kB' 'MemAvailable: 47728760 kB' 'Buffers: 2704 kB' 'Cached: 12757144 kB' 'SwapCached: 0 kB' 'Active: 8771012 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375388 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530512 kB' 'Mapped: 207064 kB' 'Shmem: 7848156 kB' 'KReclaimable: 233028 kB' 'Slab: 612116 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379088 kB' 'KernelStack: 12784 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.447 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 
02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 
02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.448 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.449 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43196328 kB' 'MemAvailable: 47730696 kB' 'Buffers: 2704 kB' 'Cached: 12757160 kB' 'SwapCached: 0 kB' 'Active: 8770896 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375272 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530352 kB' 'Mapped: 206988 kB' 'Shmem: 7848172 kB' 'KReclaimable: 233028 kB' 'Slab: 612128 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379100 kB' 'KernelStack: 12784 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:58.449 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.449 
02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.449 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.449 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.449 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.449 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.449
[... the same xtrace triplet (IFS=': ' / read -r var val _ / field test, then continue) repeats for every remaining /proc/meminfo field from MemAvailable through HugePages_Total ...]
00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.450 02:03:26
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.450 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.451 nr_hugepages=1024 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.451 resv_hugepages=0 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.451 surplus_hugepages=0 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.451 anon_hugepages=0 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43196328 kB' 'MemAvailable: 47730696 kB' 'Buffers: 2704 kB' 'Cached: 12757184 kB' 'SwapCached: 0 kB' 'Active: 8770936 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375312 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530356 kB' 'Mapped: 206988 kB' 'Shmem: 7848196 kB' 'KReclaimable: 233028 kB' 'Slab: 612128 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379100 kB' 'KernelStack: 12784 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 
'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.451 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.451
[... the same field-by-field scan repeats for every /proc/meminfo field from MemFree through HugePages_Free ...]
02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.452 02:03:26
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.452 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.452 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.452 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.452 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.452 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.452 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21249012 kB' 'MemUsed: 11627928 kB' 'SwapCached: 0 kB' 'Active: 5908388 kB' 'Inactive: 3429452 kB' 'Active(anon): 5636456 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3429452 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9260504 kB' 'Mapped: 81568 kB' 'AnonPages: 80512 kB' 'Shmem: 5559120 kB' 'KernelStack: 7080 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98568 kB' 'Slab: 322000 kB' 'SReclaimable: 98568 kB' 'SUnreclaim: 223432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.453 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.453
[... the same field-by-field scan repeats for every node0 meminfo field from MemFree through FileHugePages ...]
02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21948008 
kB' 'MemUsed: 5716764 kB' 'SwapCached: 0 kB' 'Active: 2862604 kB' 'Inactive: 1086616 kB' 'Active(anon): 2738912 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1086616 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3499412 kB' 'Mapped: 125420 kB' 'AnonPages: 449892 kB' 'Shmem: 2289104 kB' 'KernelStack: 5720 kB' 'PageTables: 5164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134460 kB' 'Slab: 290128 kB' 'SReclaimable: 134460 kB' 'SUnreclaim: 155668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.454 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.455 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:58.456 node0=512 expecting 512 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:58.456 node1=512 expecting 512 00:03:58.456 02:03:26 
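The lookups traced above are setup/common.sh's get_meminfo: pick the per-node meminfo file when a node is given, strip the "Node N " prefix those files carry, then scan key by key until the requested field is found. Below is a minimal standalone sketch of that parsing technique, reconstructed from the traced commands rather than copied from SPDK's setup/common.sh; names and structure are illustrative.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix-strip pattern below

    # Reconstructed sketch of the traced lookup (assumed, not verbatim SPDK).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem
        local mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node index is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so keys
        # parse the same way as plain /proc/meminfo keys.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 1   # on the box traced above this prints: 0

Feeding the array back through printf in a process substitution keeps IFS=': ' scoped to the read, so a line like 'HugePages_Surp: 0' splits cleanly into the key and its value, with any trailing unit landing in the throwaway field.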
00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:58.456 
00:03:58.456 real	0m1.365s
00:03:58.456 user	0m0.584s
00:03:58.456 sys	0m0.743s
02:03:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:58.456 02:03:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:58.456 ************************************
00:03:58.456 END TEST per_node_1G_alloc
00:03:58.456 ************************************
00:03:58.715 02:03:26 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:58.715 02:03:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:58.715 02:03:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:58.715 02:03:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:58.715 ************************************
00:03:58.715 START TEST even_2G_alloc
00:03:58.715 ************************************
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
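In the trace above, get_test_nr_hugepages turns the 2 GiB request into 1024 hugepages and get_test_nr_hugepages_per_node fills nodes_test from the highest-numbered node down, 512 pages each. A small sketch of that arithmetic, under the assumption (implied by the traced values and the 'Hugepagesize: 2048 kB' snapshots) that both sizes are expressed in kB:

    #!/usr/bin/env bash
    # Even split of a hugepage target across NUMA nodes; a reconstruction of
    # the traced loop, with sizes assumed to be in kB.
    size=2097152        # requested pool: 2 GiB expressed in kB
    hugepagesize=2048   # kB, matching 'Hugepagesize: 2048 kB'
    _no_nodes=2

    nr_hugepages=$((size / hugepagesize))    # 1024
    per_node=$((nr_hugepages / _no_nodes))   # 512
    declare -a nodes_test

    # Fill from the highest-numbered node down, as the traced loop does.
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$per_node
        _no_nodes=$((_no_nodes - 1))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting $per_node"
    done

Run as-is this prints node0=512 expecting 512 and node1=512 expecting 512, matching the verification output seen earlier in the log.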
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.715 02:03:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.653 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.653 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.653 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.653 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.653 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.653 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.653 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.653 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.653 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.653 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.653 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.653 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.653 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.653 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.653 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.653 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.653 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
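The verification pass that follows opens by testing the transparent_hugepage 'enabled' string against *[never]*; the traced value 'always [madvise] never' is the usual content of the standard sysfs control file, so the check reads as: only count AnonHugePages when THP is not globally disabled. A hedged sketch of that gate (the awk fallback is illustrative, not SPDK's own helper):

    #!/usr/bin/env bash
    # Gate used by the verification pass below: anonymous hugepages are only
    # counted when transparent hugepages are not globally disabled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # Illustrative stand-in for the traced get_meminfo AnonHugePages call.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB
    fi
    echo "anon=$anon"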
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43195800 kB' 'MemAvailable: 47730168 kB' 'Buffers: 2704 kB' 'Cached: 12757268 kB' 'SwapCached: 0 kB' 'Active: 8770912 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375288 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530228 kB' 'Mapped: 207496 kB' 'Shmem: 7848280 kB' 'KReclaimable: 233028 kB' 'Slab: 612172 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379144 kB' 'KernelStack: 12736 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
00:03:59.917 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue, repeated for every field from MemTotal through HardwareCorrupted -- none match]
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43195408 kB' 'MemAvailable: 47729776 kB' 'Buffers: 2704 kB' 'Cached: 12757272 kB' 'SwapCached: 0 kB' 'Active: 8771224 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375600 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530592 kB' 'Mapped: 207076 kB' 'Shmem: 7848284 kB' 'KReclaimable: 233028 kB' 'Slab: 612172 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379144 kB' 'KernelStack: 12784 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
00:03:59.919 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue, repeated for every field from MemTotal through HugePages_Rsvd -- none match]
00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43195408 kB' 'MemAvailable: 47729776 kB' 'Buffers: 2704 kB' 'Cached: 12757288 kB' 'SwapCached: 0 kB' 'Active: 8771068 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375444 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530428 kB' 'Mapped: 207000 kB' 'Shmem: 7848300 kB' 'KReclaimable: 233028 kB' 'Slab: 612180 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379152 kB' 'KernelStack: 12784 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.921 02:03:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
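
The cycle condensed above is the body of get_meminfo in setup/common.sh: slurp the meminfo file with mapfile, strip any per-node "Node N " prefix, then split each "key: value" line with IFS=': ' until the requested key matches and echo its value. A minimal sketch reconstructed from the @17-@33 trace lines (inferred from the xtrace output, not copied from the repository):

  shopt -s extglob   # the +([0-9]) pattern below needs extglob
  get_meminfo() {
      local get=$1 node=${2:-} var val _ mem_f mem
      mem_f=/proc/meminfo
      # With a node argument, read that NUMA node's own meminfo instead.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

get_meminfo HugePages_Surp reads /proc/meminfo and prints 0 in this run; get_meminfo HugePages_Surp 0 would read node0's file instead, as the trace does further down.
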
[... xtrace condensed: every /proc/meminfo key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped via continue ...]
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:59.923 nr_hugepages=1024
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.923 resv_hugepages=0
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.923 surplus_hugepages=0
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.923 anon_hugepages=0
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
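
With surp and resv extracted, hugepages.sh@102-@109 prints the numbers and asserts that the configured pool is consistent: the requested page count must equal nr_hugepages on its own and, together with surplus and reserved pages, account for the whole pool. The same arithmetic as a standalone sketch (the wrapper name verify_hugepage_pool is ours; only the checks come from the trace):

  # Hypothetical wrapper around the traced @99-@110 steps; uses get_meminfo above.
  verify_hugepage_pool() {
      local expected=$1 surp resv total
      surp=$(get_meminfo HugePages_Surp)     # -> 0 in this run
      resv=$(get_meminfo HugePages_Rsvd)     # -> 0 in this run
      total=$(get_meminfo HugePages_Total)   # -> 1024 in this run
      echo "nr_hugepages=$expected"
      echo "resv_hugepages=$resv"
      echo "surplus_hugepages=$surp"
      # Here both checks reduce to 1024 == 1024 + 0 + 0.
      (( total == expected )) && (( total == expected + surp + resv ))
  }
  verify_hugepage_pool 1024
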
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.923 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43195408 kB' 'MemAvailable: 47729776 kB' 'Buffers: 2704 kB' 'Cached: 12757308 kB' 'SwapCached: 0 kB' 'Active: 8771456 kB' 'Inactive: 4516068 kB' 'Active(anon): 8375832 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530748 kB' 'Mapped: 207000 kB' 'Shmem: 7848320 kB' 'KReclaimable: 233028 kB' 'Slab: 612180 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379152 kB' 'KernelStack: 12800 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9459760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[... xtrace condensed: every /proc/meminfo key from MemTotal through Unaccepted is compared against HugePages_Total and skipped via continue ...]
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
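
get_nodes (hugepages.sh@27-@33) enumerates /sys/devices/system/node/node* and records a per-node hugepage count; both assignments above store 512, i.e. the 1024-page pool split evenly across the two NUMA nodes, which is exactly what the even_2G_alloc test verifies. A sketch of that enumeration; reading nr_hugepages from the per-node sysfs path is our assumption about where the 512 comes from, since the trace only shows the expanded result:

  shopt -s extglob nullglob
  declare -a nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # Assumed source of the per-node count (the trace only shows "=512").
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || exit 1
  echo "no_nodes=$no_nodes per_node=${nodes_sys[*]}"   # here: no_nodes=2 per_node=512 512
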
+([0-9]) }") 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21250604 kB' 'MemUsed: 11626336 kB' 'SwapCached: 0 kB' 'Active: 5907980 kB' 'Inactive: 3429452 kB' 'Active(anon): 5636048 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3429452 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9260504 kB' 'Mapped: 81568 kB' 'AnonPages: 80080 kB' 'Shmem: 5559120 kB' 'KernelStack: 7048 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98568 kB' 'Slab: 322016 kB' 'SReclaimable: 98568 kB' 'SUnreclaim: 223448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.925 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
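
(Aside: the backslash-heavy comparisons throughout this trace, e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p, are xtrace's rendering of a quoted right-hand side: inside [[ ]] the == operator performs pattern matching, so the script quotes the wanted field name to force a literal match, and xtrace prints it with every character escaped. Below is a condensed, self-contained sketch of the field-scan loop being replayed above; it is an illustration of the idiom, not the setup/common.sh source, and the field name is just the one this run happens to query.)

    #!/usr/bin/env bash
    # Scan a meminfo-style file for one field, as the traced loop does.
    get=HugePages_Surp
    while IFS=': ' read -r var val _; do
        # Quoting "$get" forces a literal comparison; xtrace displays it
        # as \H\u\g\e\P\a\g\e\s\_\S\u\r\p, matching the lines above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        break
    done < /proc/meminfo
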
00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
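
(Aside: lines read from a per-node file such as /sys/devices/system/node/node1/meminfo carry a "Node <id> " prefix that /proc/meminfo lines lack; the mem=("${mem[@]#Node +([0-9]) }") step above strips it so one parser handles both sources. A minimal sketch of that select-and-strip step follows, assuming extglob is enabled as it is in the traced script; the trailing printf is only there to show the result.)

    #!/usr/bin/env bash
    # Prefer the per-node meminfo file when it exists, then normalize it.
    shopt -s extglob
    node=1
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Drop the "Node N " prefix; /proc/meminfo lines are left untouched.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"
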
00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.926 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21944300 kB' 'MemUsed: 5720472 kB' 'SwapCached: 0 kB' 'Active: 2863020 kB' 'Inactive: 1086616 kB' 'Active(anon): 2739328 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1086616 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3499552 kB' 'Mapped: 125432 kB' 'AnonPages: 450164 kB' 'Shmem: 2289244 kB' 'KernelStack: 5720 kB' 'PageTables: 5152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134460 kB' 'Slab: 290164 kB' 'SReclaimable: 134460 kB' 'SUnreclaim: 155704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.927 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
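
(Aside: the sizing math behind the "node0=512 expecting 512" and "node1=512 expecting 512" assertions printed just below, using the values visible in this run: 2 GiB requested, 2 MiB hugepages, two NUMA nodes. Variable names here are illustrative, not taken from the scripts.)

    #!/usr/bin/env bash
    # even_2G_alloc: convert the requested memory into 2 MiB pages and
    # spread them evenly over the NUMA nodes under /sys/devices/system/node/.
    hugemem_mb=2048                  # 2 GiB requested by the test
    hugepagesize_kb=2048             # default 2 MiB hugepage size
    no_nodes=2                       # node0 and node1 in this run
    nr_hugepages=$(( hugemem_mb * 1024 / hugepagesize_kb ))   # 1024 pages
    echo "per-node: $(( nr_hugepages / no_nodes ))"           # 512
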
00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.187 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.188 node0=512 expecting 512 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:00.188 node1=512 expecting 512 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:00.188 00:04:00.188 real 0m1.455s 00:04:00.188 user 0m0.644s 00:04:00.188 sys 0m0.776s 00:04:00.188 02:03:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.188 02:03:28 
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.188 ************************************ 00:04:00.188 END TEST even_2G_alloc 00:04:00.188 ************************************ 00:04:00.188 02:03:28 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:00.188 02:03:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.188 02:03:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.188 02:03:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.188 ************************************ 00:04:00.188 START TEST odd_alloc 00:04:00.188 ************************************ 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.188 02:03:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.124 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.124 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.124 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.124 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.124 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.124 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.124 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:01.124 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.124 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.124 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.124 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.124 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.124 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.124 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.124 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:01.124 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.124 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43278752 kB' 'MemAvailable: 47813120 kB' 'Buffers: 2704 kB' 'Cached: 12757408 kB' 'SwapCached: 0 kB' 'Active: 8768608 
kB' 'Inactive: 4516068 kB' 'Active(anon): 8372984 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528016 kB' 'Mapped: 206260 kB' 'Shmem: 7848420 kB' 'KReclaimable: 233028 kB' 'Slab: 612108 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379080 kB' 'KernelStack: 12768 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9448756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.390 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.391 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[trace condensed: the @31 read / @32 continue pair repeats for Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted]
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.392 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43278204 kB' 'MemAvailable: 47812572 kB' 'Buffers: 2704 kB' 'Cached: 12757412 kB' 'SwapCached: 0 kB' 'Active: 8769540 kB' 'Inactive: 4516068 kB' 'Active(anon): 8373916 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528908 kB' 'Mapped: 206252 kB' 'Shmem: 7848424 kB' 'KReclaimable: 233028 kB' 'Slab: 612080 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379052 kB' 'KernelStack: 12832 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9445784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[trace condensed: the @31 read / @32 continue pair repeats for every field from MemTotal through HugePages_Rsvd, none of which matches HugePages_Surp]
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
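The @17-@33 records above trace one meminfo lookup end to end: pick /proc/meminfo (or a per-node sysfs file), mapfile it, strip any "Node <n> " prefix, then read field/value pairs until the requested field matches. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh (the function body and argument handling here are assumptions):

    shopt -s extglob

    get_meminfo() {
        # Look up one field, e.g. HugePages_Surp, optionally on a given NUMA node.
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs; fall back to /proc/meminfo otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node <n> "; strip it (needs extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With this in place, get_meminfo HugePages_Surp prints 0 on this box, and get_meminfo HugePages_Surp 0 would consult node0's sysfs file, matching the node=0 records that appear later in the trace.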
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.394 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43276992 kB' 'MemAvailable: 47811360 kB' 'Buffers: 2704 kB' 'Cached: 12757424 kB' 'SwapCached: 0 kB' 'Active: 8769376 kB' 'Inactive: 4516068 kB' 'Active(anon): 8373752 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528688 kB' 'Mapped: 206252 kB' 'Shmem: 7848436 kB' 'KReclaimable: 233028 kB' 'Slab: 612064 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 379036 kB' 'KernelStack: 13008 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9447168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[trace condensed: the @31 read / @32 continue pair repeats for every field from MemTotal through HugePages_Free, none of which matches HugePages_Rsvd]
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:01.396 nr_hugepages=1025
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.396 resv_hugepages=0
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.396 surplus_hugepages=0
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.396 anon_hugepages=0
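The three lookups above leave anon=0, surp=0 and resv=0, and the @107-@110 checks that follow verify them against the requested pool size. A short bash sketch of that arithmetic, using the hypothetical get_meminfo helper sketched earlier (variable names are illustrative, not taken from setup/hugepages.sh):

    nr_hugepages=1025                      # the odd allocation under test
    anon=$(get_meminfo AnonHugePages)      # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)   # 1025

    # The pool is consistent when the kernel reports exactly the requested
    # page count with nothing surplus or still reserved: 1025 == 1025 + 0 + 0.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2
    (( total == nr_hugepages )) && echo "odd allocation of $total pages confirmed"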
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.396 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43274832 kB' 'MemAvailable: 47809200 kB' 'Buffers: 2704 kB' 'Cached: 12757448 kB' 'SwapCached: 0 kB' 'Active: 8769088 kB' 'Inactive: 4516068 kB' 'Active(anon): 8373464 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528320 kB' 'Mapped: 206252 kB' 'Shmem: 7848460 kB' 'KReclaimable: 233028 kB' 'Slab: 612024 kB' 'SReclaimable: 233028 kB' 'SUnreclaim: 378996 kB' 'KernelStack: 13024 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9447188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[trace condensed: the @31 read / @32 continue pair repeats for every field from MemTotal through Unaccepted, none of which matches HugePages_Total]
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21331128 kB' 'MemUsed: 11545812 kB' 'SwapCached: 0 kB' 'Active: 5907488 kB' 'Inactive: 3429452 kB' 
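The scan traced above is common.sh's get_meminfo: it walks /proc/meminfo (or a per-node meminfo file) key by key until the requested key matches, then prints the value and returns. A self-contained sketch reconstructed from the xtrace (not the verbatim setup/common.sh source; the mapfile-based original is restated as a plain read loop):

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " prefix strip below needs extended globs

    # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }   # per-node lines carry a "Node <N> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total   # prints 1025 on this runner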
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.398 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21331128 kB' 'MemUsed: 11545812 kB' 'SwapCached: 0 kB' 'Active: 5907488 kB' 'Inactive: 3429452 kB' 'Active(anon): 5635556 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3429452 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9260584 kB' 'Mapped: 80964 kB' 'AnonPages: 79604 kB' 'Shmem: 5559200 kB' 'KernelStack: 7016 kB' 'PageTables: 2652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98568 kB' 'Slab: 321860 kB' 'SReclaimable: 98568 kB' 'SUnreclaim: 223292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical read/compare/continue xtrace repeats for each node0 meminfo key (MemTotal through HugePages_Free) ...]
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.399 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21941304 kB' 'MemUsed: 5723468 kB' 'SwapCached: 0 kB' 'Active: 2861280 kB' 'Inactive: 1086616 kB' 'Active(anon): 2737588 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1086616 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3499592 kB' 'Mapped: 125212 kB' 'AnonPages: 448336 kB' 'Shmem: 2289284 kB' 'KernelStack: 5864 kB' 'PageTables: 6100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134460 kB' 'Slab: 290184 kB' 'SReclaimable: 134460 kB' 'SUnreclaim: 155724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
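Both per-node lookups above read /sys/devices/system/node/node<N>/meminfo, and get_nodes (hugepages.sh@27-33) enumerates the nodes the same way, by globbing that directory. A minimal standalone equivalent (reconstructed; reading the per-node 2048 kB pool from sysfs is an assumption, the trace only shows the resulting values 512 and 513):

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node", leaving the index
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    echo "$no_nodes nodes: ${nodes_sys[*]}"   # here: 2 nodes: 512 513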
[... identical read/compare/continue xtrace repeats for each node1 meminfo key (MemTotal through HugePages_Free) ...]
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:01.401 node0=512 expecting 513
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:01.401 node1=513 expecting 512
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:01.401
00:04:01.401 real 0m1.398s
00:04:01.401 user 0m0.627s
00:04:01.401 sys 0m0.732s
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:01.401 02:03:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:01.401 ************************************
00:04:01.401 END TEST odd_alloc
00:04:01.401 ************************************
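The odd_alloc pass/fail logic above is worth spelling out: 1025 pages cannot split evenly across two nodes, so the kernel gives one node 512 and the other 513 (512 + 513 = 1025). Which node receives the extra page is not guaranteed, so hugepages.sh@126-130 compares the two sets of counts order-insensitively; that is why "node0=512 expecting 513" still passes. A sketch of the comparison, using the same indexed-array trick the trace shows:

    nodes_test=(513 512)   # per-node counts the test expected
    nodes_sys=(512 513)    # per-node counts actually observed

    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        # store each count as an array *index*: bash reports indexed-array
        # keys in ascending order, so ${!arr[*]} comes out sorted
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # both key lists expand to "512 513", so the distribution is accepted
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd allocation OK'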
00:04:01.661 02:03:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:01.661 02:03:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:01.661 02:03:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:01.661 02:03:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.661 ************************************
00:04:01.661 START TEST custom_alloc
00:04:01.661 ************************************
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
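get_test_nr_hugepages above converts a size in kB into a page count using the system default hugepage size: 1048576 kB / 2048 kB = 512 pages, and the second call just below turns 2097152 kB into 1024 pages. A minimal sketch of the conversion (reconstructed; deriving default_hugepages from /proc/meminfo's Hugepagesize is an assumption, its assignment is not part of this excerpt):

    # default hugepage size in kB; 2048 on this runner
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    get_test_nr_hugepages() {
        local size=$1                            # requested pool size, in kB
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
    }

    get_test_nr_hugepages 1048576 && echo "$nr_hugepages"   # 512
    get_test_nr_hugepages 2097152 && echo "$nr_hugepages"   # 1024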
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.661 02:03:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
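The @181-@187 steps above build the HUGENODE argument handed to scripts/setup.sh: one 'nodes_hp[N]=count' entry per node, joined with commas because custom_alloc declared 'local IFS=,' at @167 and the array is expanded with [*]. A standalone sketch of the same join (reconstructed from the trace):

    IFS=,                        # set by custom_alloc; makes ${arr[*]} join on commas
    nodes_hp=([0]=512 [1]=1024)

    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done

    echo "HUGENODE=${HUGENODE[*]}"       # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$_nr_hugepages"   # 1536, as verified below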
00:04:02.598 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.598 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:02.598 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.598 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.598 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.598 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.598 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.598 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.598 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:02.598 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.598 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.598 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.598 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.598 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.598 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.598 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.598 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
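Each "Already using the vfio-pci driver" line above means setup.sh found the device bound to vfio-pci already and left the binding alone. The current binding of any PCI function is visible in sysfs; an illustrative check (not the setup.sh source):

    dev=0000:88:00.0   # the NVMe device (8086 0a54) from the listing above
    driver=$(basename "$(readlink -f /sys/bus/pci/devices/$dev/driver)")
    echo "$dev is bound to $driver"   # vfio-pci on this runner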
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
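The @96 test above inspects /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active THP mode ("always [madvise] never" on this runner, i.e. madvise). Only when THP is not disabled does verify_nr_hugepages go on to sample AnonHugePages, presumably so THP-backed anonymous pages can be accounted for separately. A minimal equivalent of the check (reconstructed):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # get_meminfo as sketched earlier in this log
        anon=$(get_meminfo AnonHugePages)
        echo "THP active ($thp); AnonHugePages=$anon kB"
    fi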
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.864 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42225684 kB' 'MemAvailable: 46760084 kB' 'Buffers: 2704 kB' 'Cached: 12757536 kB' 'SwapCached: 0 kB' 'Active: 8767864 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372240 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526408 kB' 'Mapped: 206216 kB' 'Shmem: 7848548 kB' 'KReclaimable: 233092 kB' 'Slab: 611976 kB' 'SReclaimable: 233092 kB' 'SUnreclaim: 378884 kB' 'KernelStack: 12736 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9444896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[... identical read/compare/continue xtrace repeats for each /proc/meminfo key while scanning for AnonHugePages ...]
00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 
02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.865 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
node= 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42226012 kB' 'MemAvailable: 46760408 kB' 'Buffers: 2704 kB' 'Cached: 12757540 kB' 'SwapCached: 0 kB' 'Active: 8767704 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372080 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526776 kB' 'Mapped: 206192 kB' 'Shmem: 7848552 kB' 'KReclaimable: 233084 kB' 'Slab: 611944 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378860 kB' 'KernelStack: 12768 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9444916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.866 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.866 02:03:30 
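The trace above and below is bash xtrace of the get_meminfo helper in setup/common.sh: it snapshots the meminfo file into an array, then walks "key: value" pairs until the requested key matches. A minimal sketch of that logic, reconstructed from the trace rather than quoted from the SPDK source (the not-found return value and the skipped @25 check are assumptions):

#!/usr/bin/env bash
# Minimal reconstruction of get_meminfo as the xtrace shows it; inferred
# from the trace, not copied verbatim from setup/common.sh.
shopt -s extglob # required for the +([0-9]) pattern below

get_meminfo() {
	local get=$1  # meminfo key to fetch, e.g. HugePages_Total
	local node=$2 # optional NUMA node; empty means system-wide
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node given, prefer the per-node sysfs meminfo; with node empty
	# this probes ".../node/node/meminfo", which fails, exactly as traced.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip it so keys match.
	mem=("${mem[@]#Node +([0-9]) }")

	# Walk "key: value" pairs until the requested key matches, then echo it.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1 # assumption: key not found
}

Read this way, the @16 printf appearing in the trace only after the first @31 read is consistent with the loop consuming a process substitution over the mem array.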
[... repetitive xtrace elided: setup/common.sh@31-32 walks every key of the snapshot above, from MemTotal through HugePages_Rsvd, hitting "continue" on each key that is not HugePages_Surp ...]
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
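One detail worth noting in the preamble repeated before each lookup: node= is empty, so the @23 probe tests the nonexistent path /sys/devices/system/node/node/meminfo and the helper stays on /proc/meminfo. A hypothetical usage sketch, assuming the reconstruction above:

get_meminfo HugePages_Rsvd    # node empty: reads system-wide /proc/meminfo, as traced here
get_meminfo HugePages_Rsvd 0  # hypothetical: would read /sys/devices/system/node/node0/meminfo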
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.868 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42226544 kB' 'MemAvailable: 46760940 kB' 'Buffers: 2704 kB' 'Cached: 12757556 kB' 'SwapCached: 0 kB' 'Active: 8767708 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372084 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526700 kB' 'Mapped: 206192 kB' 'Shmem: 7848568 kB' 'KReclaimable: 233084 kB' 'Slab: 612028 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378944 kB' 'KernelStack: 12768 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9444936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[... repetitive xtrace elided: setup/common.sh@31-32 walks every key of the snapshot above, from MemTotal through HugePages_Free, hitting "continue" on each key that is not HugePages_Rsvd ...]
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:02.870 nr_hugepages=1536
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:02.870 resv_hugepages=0
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:02.870 surplus_hugepages=0
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:02.870 anon_hugepages=0
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
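The setup/hugepages.sh@97-110 steps just traced are the custom_alloc accounting check: the three counters read back (anon, surp, resv) must leave the requested pool of 1536 pages fully accounted for before the HugePages_Total lookup that follows. A sketch of that check under the reconstruction above (the literal 1536 on the left of each test is already expanded in the xtrace; whatever variable produced it is not visible here, so nr_hugepages=1536 is assumed):

# Accounting check reconstructed from the setup/hugepages.sh@97-110 trace.
nr_hugepages=1536                  # assumed: set earlier by the test, matching the echo above
anon=$(get_meminfo AnonHugePages)  # 0 (kB) per the snapshots above
surp=$(get_meminfo HugePages_Surp) # 0
resv=$(get_meminfo HugePages_Rsvd) # 0
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# The pool must be fully accounted for: 1536 == 1536 + 0 + 0 ...
((1536 == nr_hugepages + surp + resv))
# ... and must match the requested size exactly.
((1536 == nr_hugepages))
# Sanity on the snapshot itself: HugePages_Total 1536 * Hugepagesize 2048 kB
# = 3145728 kB, exactly the Hugetlb figure reported above.
((1536 * 2048 == 3145728))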
7848588 kB' 'KReclaimable: 233084 kB' 'Slab: 612028 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378944 kB' 'KernelStack: 12752 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9444956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.871 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.871 02:03:30 
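The block above is the tail of get_meminfo HugePages_Rsvd: the scan returns 0, and hugepages.sh then prints the derived counters (nr_hugepages=1536, resv/surplus/anon all 0) and asserts that the total equals nr_hugepages + surp + resv. A minimal standalone sketch of that consistency check, assuming a Linux /proc/meminfo; the variable names are illustrative, not the script's own:

  #!/usr/bin/env bash
  # Re-derive the counters printed above and assert the same identity
  # hugepages.sh@107 evaluates (resv and surp are both 0 in this run).
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  nr=$(cat /proc/sys/vm/nr_hugepages)
  echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"
  (( total == nr + surp + resv )) || echo "hugepage accounting mismatch" >&2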
00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42226544 kB' 'MemAvailable: 46760940 kB' 'Buffers: 2704 kB' 'Cached: 12757576 kB' 'SwapCached: 0 kB' 'Active: 8767700 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372076 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526668 kB' 'Mapped: 206192 kB' 'Shmem: 7848588 kB' 'KReclaimable: 233084 kB' 'Slab: 612028 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378944 kB' 'KernelStack: 12752 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9444956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:02.870 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop skips every key from MemTotal through Unaccepted] 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
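Every get_meminfo call in this log follows the pattern the xtrace just exposed: pick /proc/meminfo (or a per-node sysfs copy when a node argument is given), strip the "Node N " prefix, then split each line on ': ' until the requested key matches. A condensed reconstruction of that helper, based only on the commands visible in the trace rather than the verbatim setup/common.sh:

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # A node argument switches to the per-node copy, as common.sh@23-24 does.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines are prefixed "Node N "
      local IFS=': ' line
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"  # e.g. var=HugePages_Total val=1536
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Total    # prints 1536 on the box traced here
  get_meminfo HugePages_Surp 0   # prints node0's surplus count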
00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21320072 kB' 'MemUsed: 11556868 kB' 'SwapCached: 0 kB' 'Active: 5907472 kB' 'Inactive: 3429452 kB' 'Active(anon): 5635540 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3429452 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9260712 kB' 'Mapped: 80964 kB' 'AnonPages: 79388 kB' 'Shmem: 5559328 kB' 'KernelStack: 7096 kB' 'PageTables: 2756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98568 kB' 'Slab: 321820 kB' 'SReclaimable: 98568 kB' 'SUnreclaim: 223252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.872 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop skips every node0 key from MemTotal through HugePages_Free] 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 20906472 kB' 'MemUsed: 6758300 kB' 'SwapCached: 0 kB' 'Active: 2860280 kB' 'Inactive: 1086616 kB' 'Active(anon): 2736588 kB' 'Inactive(anon): 0 kB' 'Active(file): 123692 kB' 'Inactive(file): 1086616 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3499592 kB' 'Mapped: 125228 kB' 'AnonPages: 447308 kB' 'Shmem: 2289284 kB' 'KernelStack: 5672 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134516 kB' 'Slab: 290208 kB' 'SReclaimable: 134516 kB' 'SUnreclaim: 155692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.874 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop skips every node1 key from MemTotal through HugePages_Free] 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
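get_meminfo has now answered for both NUMA nodes: node0 reports 512 hugepages and node1 reports 1024, each with zero surplus, matching the 1536 total. A standalone sketch of the same per-node comparison; the hugepages-2048kB sysfs path is the conventional per-node counter and matches the 2048 kB Hugepagesize in the dumps above, though the exact file get_nodes reads is not visible in this trace:

  # Gather per-node 2048kB hugepage counts and compare with the
  # 512/1024 split custom_alloc expects on this two-node box.
  declare -a nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      nodes_sys[n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "node0=${nodes_sys[0]} expecting 512"
  echo "node1=${nodes_sys[1]} expecting 1024"
  [[ ${nodes_sys[0]},${nodes_sys[1]} == 512,1024 ]] || echo "unexpected split" >&2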
00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.875 node0=512 expecting 512 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:02.875 node1=1024 expecting 1024 00:04:02.875 02:03:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:02.875 00:04:02.876 real 0m1.387s 00:04:02.876 user 0m0.574s 00:04:02.876 sys 0m0.774s 00:04:02.876 02:03:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.876 02:03:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.876 ************************************ 00:04:02.876 END TEST custom_alloc 00:04:02.876 ************************************ 00:04:02.876 02:03:30 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:02.876 02:03:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.876 02:03:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.876 02:03:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.876 ************************************ 00:04:02.876 START TEST no_shrink_alloc 00:04:02.876 ************************************ 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
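The no_shrink_alloc test opening above asks get_test_nr_hugepages for 2097152 on node 0 and arrives at nr_hugepages=1024. Assuming the size argument is in kB (the only reading consistent with the traced result), that is just the request divided by the 2048 kB default hugepage size:

  # Sketch of the arithmetic behind hugepages.sh@57 above; the kB unit
  # for size is an assumption that matches nr_hugepages=1024 in the trace.
  size_kb=2097152
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
  echo "nr_hugepages=$(( size_kb / hugepage_kb ))"                 # 2097152 / 2048 = 1024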
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.876 02:03:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.257 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:04.257 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.257 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.257 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.257 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.257 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.257 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:04.257 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.257 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.257 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.257 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.257 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.257 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.257 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.257 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:04.257 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.257 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- 
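Before no_shrink_alloc touches anything, the @62-@73 trace above shows get_test_nr_hugepages_per_node seeding its bookkeeping: the caller named node 0, so the full 1024-page request lands there and the function returns. A minimal sketch of that logic, assuming an even-split fallback for the no-argument case (only the named-node branch is visible in this run):

get_test_nr_hugepages_per_node() {
    local user_nodes=("$@")             # ('0') in this run
    local _nr_hugepages=$nr_hugepages   # 1024 here, set at @57
    local _no_nodes=2                   # NUMA nodes on this rig
    nodes_test=()
    local -g nodes_test
    if (( ${#user_nodes[@]} > 0 )); then
        # each named node gets the full request; note the loop reuses
        # _no_nodes as the node id, exactly as @70-@71 show
        for _no_nodes in "${user_nodes[@]}"; do
            nodes_test[_no_nodes]=$_nr_hugepages
        done
        return 0
    fi
    # assumed fallback: split the request evenly across all nodes
    local i
    for (( i = 0; i < _no_nodes; i++ )); do
        nodes_test[i]=$(( _nr_hugepages / _no_nodes ))
    done
}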
setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.257 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43268392 kB' 'MemAvailable: 47802788 kB' 'Buffers: 2704 kB' 'Cached: 12757660 kB' 'SwapCached: 0 kB' 'Active: 8768180 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372556 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527076 kB' 'Mapped: 206328 kB' 'Shmem: 7848672 kB' 'KReclaimable: 233084 kB' 'Slab: 611940 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378856 kB' 'KernelStack: 12736 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.258 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.258 02:03:32 
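From here on, the wall of backslash-escaped [[ ... ]] checks is just xtrace rendering common.sh's get_meminfo as it walks /proc/meminfo one field at a time until the requested key matches. Note the @23 probe for /sys/devices/system/node/node/meminfo: $node is empty for a machine-wide query, so that path never exists and the helper stays on /proc/meminfo. A functionally equivalent sketch (the real helper streams with read/continue rather than a for loop; extglob is assumed for the Node-prefix strip):

shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node queries read that node's own meminfo instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix on per-node files
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # value only, units dropped
            return 0
        fi
    done
    return 1
}

Called as anon=$(get_meminfo AnonHugePages), it returns 0 for this snapshot, which is exactly what hugepages.sh@97 records below.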
[... setup/common.sh@31-32 xtrace elided: the IFS=': ' / read / continue scan repeats for each remaining meminfo field (Buffers through VmallocUsed); none matches AnonHugePages ...]
continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43268700 kB' 'MemAvailable: 47803096 kB' 'Buffers: 2704 kB' 'Cached: 12757660 kB' 'SwapCached: 0 kB' 'Active: 8768784 kB' 'Inactive: 4516068 kB' 'Active(anon): 8373160 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527692 kB' 'Mapped: 206748 kB' 'Shmem: 7848672 kB' 'KReclaimable: 233084 kB' 'Slab: 611940 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378856 kB' 'KernelStack: 12736 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9447060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.259 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
[... setup/common.sh@31-32 xtrace elided: the same field-by-field scan repeats for the HugePages_Surp query (Active through Unaccepted) without a match ...]
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43261984 kB' 'MemAvailable: 47796380 kB' 'Buffers: 2704 kB' 'Cached: 12757676 kB' 'SwapCached: 0 kB' 'Active: 8772396 kB' 'Inactive: 4516068 kB' 'Active(anon): 8376772 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531296 kB' 'Mapped: 206636 kB' 'Shmem: 7848688 kB' 'KReclaimable: 233084 kB' 'Slab: 611936 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378852 kB' 
'KernelStack: 12736 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9450516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.261 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.261 02:03:32 
[... setup/common.sh@31-32 xtrace elided: the same field-by-field scan repeats for the HugePages_Rsvd query (Inactive through AnonHugePages) without a match ...]
00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.526 nr_hugepages=1024 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.526 resv_hugepages=0 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.526 surplus_hugepages=0 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.526 anon_hugepages=0 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.526 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43264344 kB' 'MemAvailable: 47798740 kB' 'Buffers: 2704 kB' 'Cached: 12757700 kB' 'SwapCached: 0 kB' 'Active: 8773684 kB' 'Inactive: 4516068 kB' 'Active(anon): 8378060 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532624 kB' 'Mapped: 207044 kB' 'Shmem: 7848712 kB' 'KReclaimable: 233084 
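[note: the loop traced above is the get_meminfo helper in setup/common.sh scanning a meminfo-style file one key at a time. The sketch below shows the same pattern as a standalone bash function; the name get_meminfo_value and its defaults are illustrative, not the autotest's actual code.]

  #!/usr/bin/env bash
  # Sketch: print the value of one key from a meminfo-style file,
  # mirroring the read/continue loop in the trace.
  shopt -s extglob

  get_meminfo_value() {
      local get=$1 mem_f=${2:-/proc/meminfo}
      local var val _ line mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip it so
      # /proc/meminfo and the sysfs copies parse identically.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip non-matching keys (the long run of "continue" above)
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_value HugePages_Rsvd   # prints 0 on this runner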
[... xtrace elided: the same setup/common.sh@31-32 read/continue scan, now looking for HugePages_Total, steps over every key from MemTotal through Unaccepted ...]
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.528 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20263520 kB' 'MemUsed: 12613420 kB' 'SwapCached: 0 kB' 'Active: 5910548 kB' 'Inactive: 3429452 kB' 'Active(anon): 5638616 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3429452 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9260808 kB' 'Mapped: 81400 kB' 'AnonPages: 82344 kB' 'Shmem: 5559424 kB' 'KernelStack: 7016 kB' 'PageTables: 2596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98568 kB' 'Slab: 321764 kB' 'SReclaimable: 98568 kB' 'SUnreclaim: 223196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
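[note: as the trace shows, when get_meminfo is given a node argument it switches its input from /proc/meminfo to the per-node copy under /sys/devices/system/node/node<N>/meminfo, and get_nodes enumerates nodes with an extglob pattern. A rough sketch of both steps follows; the loop body and variable names are illustrative.]

  #!/usr/bin/env bash
  # Sketch: per-node meminfo source selection and NUMA node enumeration.
  shopt -s extglob nullglob

  node=0
  mem_f=/proc/meminfo
  # Prefer the per-node counters exported by sysfs when they exist.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  echo "reading hugepage counters from $mem_f"

  # Enumerate nodes the way the traced get_nodes loop does.
  for n in /sys/devices/system/node/node+([0-9]); do
      echo "node ${n##*node}: $(grep HugePages_Total "$n/meminfo")"
  done

[note: on this two-node runner the enumeration yields node0 with all 1024 hugepages and node1 with none, which is why nodes_sys above is set to 1024 and then 0.]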
[... xtrace elided: the setup/common.sh@31-32 read/continue scan over node0's counters, looking for HugePages_Surp, steps over MemTotal through HugePages_Free ...]
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:04.530 node0=1024 expecting 1024
02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.530 02:03:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:05.909 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.909 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.909 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.909 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.909 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.909 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.909 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.909 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.909 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:05.909 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.909 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.909 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.909 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.909 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.909 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.909 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.909 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:05.909 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43251348 kB' 'MemAvailable: 47785744 kB' 'Buffers: 2704 kB' 'Cached: 12757776 kB' 'SwapCached: 0 kB' 'Active: 8767948 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372324 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526752 kB' 'Mapped: 206204 kB' 'Shmem: 7848788 kB' 'KReclaimable: 233084 kB' 'Slab: 611936 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378852 kB' 'KernelStack: 12752 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
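[note: the INFO line above is the expected no-shrink behavior under test: setup.sh ran with NRHUGE=512 and CLEAR_HUGE=no, and since 1024 hugepages were already allocated the pool is left alone rather than shrunk. The sketch below mirrors the shape of the hugepages.sh@107 accounting check and the grow-only request; it is an illustrative reimplementation, not the autotest's or scripts/setup.sh's actual code.]

  #!/usr/bin/env bash
  # Sketch: hugepage pool accounting check plus a grow-only request.
  get() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

  nr=$(cat /proc/sys/vm/nr_hugepages)
  surp=$(get HugePages_Surp)
  resv=$(get HugePages_Rsvd)
  total=$(get HugePages_Total)

  # Mirrors the traced "(( 1024 == nr_hugepages + surp + resv ))" check;
  # with surplus and reserved both 0 here it reduces to total == requested.
  (( total == nr + surp + resv )) || echo "hugepage accounting mismatch" >&2

  # No-shrink allocation: only raise nr_hugepages, never lower it, so asking
  # for 512 while 1024 exist leaves the pool at 1024 (writing needs root).
  want=512
  if (( want > nr )); then
      echo "$want" > /proc/sys/vm/nr_hugepages
  else
      echo "INFO: Requested $want hugepages but $nr already allocated"
  fi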
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.909 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.910 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43251204 kB' 'MemAvailable: 47785600 kB' 'Buffers: 2704 kB' 'Cached: 12757780 kB' 'SwapCached: 0 kB' 'Active: 8768316 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372692 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527128 kB' 'Mapped: 206284 kB' 'Shmem: 7848792 kB' 'KReclaimable: 233084 kB' 'Slab: 612020 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378936 kB' 'KernelStack: 12736 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.911 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.912 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
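The scans above and below all follow the same get_meminfo pattern from setup/common.sh: load the meminfo table, read it field by field with IFS=': ', continue past every key that does not match the requested one (AnonHugePages, then HugePages_Surp, here HugePages_Rsvd), and echo the matching value. Below is a minimal stand-alone sketch of that lookup, assuming plain /proc/meminfo and a hypothetical helper name get_meminfo_value; it is not SPDK's actual setup/common.sh. When the trace is given a node, the real script instead reads /sys/devices/system/node/node<N>/meminfo and strips the leading "Node <N>" prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") line in the trace does.

#!/usr/bin/env bash
# Sketch only: reproduces the scan visible in the trace, not SPDK's real helper.
get_meminfo_value() {
    local get=$1 var val _              # field to look up, e.g. HugePages_Rsvd
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # skip non-matching keys, as the trace does
        echo "$val"                         # value only; a trailing "kB" unit lands in $_
        return 0
    done < /proc/meminfo
    return 1                                # field not present
}

# Example, matching the dump above: prints 0
get_meminfo_value HugePages_Rsvd

Scanning the whole table once per field is O(fields) per lookup, which is why the trace repeats the continue/IFS/read triple for every meminfo key on each of the three queries.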
00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43250684 kB' 'MemAvailable: 47785080 kB' 'Buffers: 2704 kB' 'Cached: 12757800 kB' 'SwapCached: 0 kB' 'Active: 8768132 kB' 'Inactive: 4516068 kB' 'Active(anon): 8372508 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526944 kB' 'Mapped: 206204 kB' 'Shmem: 7848812 kB' 'KReclaimable: 233084 kB' 'Slab: 611980 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378896 kB' 'KernelStack: 12768 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9445796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.913 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.914 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[xtrace condensed: setup/common.sh@31-32 repeats "continue; IFS=': '; read -r var val _" for every remaining /proc/meminfo key (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) until the key matches HugePages_Rsvd]
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.915 nr_hugepages=1024
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.915 resv_hugepages=0
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.915 surplus_hugepages=0
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.915 anon_hugepages=0
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
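Everything the scan above does is plain bash: get_meminfo reads the chosen meminfo file into an array, strips any "Node N " prefix, then splits each line on ': ' until the requested key matches and echoes its value. A minimal sketch of that pattern, reconstructed from the setup/common.sh@17-33 trace (the function name appears in the trace itself; the body below is a paraphrase, not the verbatim SPDK source):

  get_meminfo() { # sketch reconstructed from the setup/common.sh@17-33 xtrace
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo mem
      # Per-node statistics come from sysfs when a node index is passed in.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      shopt -s extglob
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
          # "HugePages_Total: 1024" splits into var=HugePages_Total val=1024
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

Called as "get_meminfo HugePages_Rsvd" it prints 0 here, exactly the "echo 0 / return 0" pair in the trace; "get_meminfo HugePages_Surp 0" (used below) reads node0's sysfs file instead.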
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.915 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43251292 kB' 'MemAvailable: 47785688 kB' 'Buffers: 2704 kB' 'Cached: 12757820 kB' 'SwapCached: 0 kB' 'Active: 8768628 kB' 'Inactive: 4516068 kB' 'Active(anon): 8373004 kB' 'Inactive(anon): 0 kB' 'Active(file): 395624 kB' 'Inactive(file): 4516068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527376 kB' 'Mapped: 206204 kB' 'Shmem: 7848832 kB' 'KReclaimable: 233084 kB' 'Slab: 611980 kB' 'SReclaimable: 233084 kB' 'SUnreclaim: 378896 kB' 'KernelStack: 12768 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9448176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1896028 kB' 'DirectMap2M: 14800896 kB' 'DirectMap1G: 52428800 kB'
[xtrace condensed: the same per-key scan as above runs over the whole snapshot (MemTotal through HugePages_Free) until the key matches HugePages_Total]
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
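get_nodes, traced at setup/hugepages.sh@27-33, just enumerates the NUMA node directories and records one hugepage count per node (1024 on node0, 0 on node1 on this box). The xtrace only shows the already-expanded values, so where they come from is an assumption here; a sketch assuming they come from get_meminfo above:

  get_nodes() { # sketch of setup/hugepages.sh@27-33; the value source is assumed
      local node
      nodes_sys=()
      shopt -s extglob
      for node in /sys/devices/system/node/node+([0-9]); do
          # ${node##*node} reduces the sysfs path to the numeric node index
          nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
      done
      no_nodes=${#nodes_sys[@]} # 2 on this host
      (( no_nodes > 0 ))        # the suite requires at least one node
  }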
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.917 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20257336 kB' 'MemUsed: 12619604 kB' 'SwapCached: 0 kB' 'Active: 5908552 kB' 'Inactive: 3429452 kB' 'Active(anon): 5636620 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3429452 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9260824 kB' 'Mapped: 80964 kB' 'AnonPages: 80308 kB' 'Shmem: 5559440 kB' 'KernelStack: 7352 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98568 kB' 'Slab: 321752 kB' 'SReclaimable: 98568 kB' 'SUnreclaim: 223184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the per-key scan runs over the node0 snapshot (MemTotal through HugePages_Free) until the key matches HugePages_Surp]
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:05.918 node0=1024 expecting 1024
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.918
00:04:05.918 real 0m2.967s
00:04:05.918 user 0m1.181s
00:04:05.918 sys 0m1.712s
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:05.918 02:03:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:05.918 ************************************
00:04:05.918 END TEST no_shrink_alloc
00:04:05.918 ************************************
00:04:05.918 02:03:33 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:05.918 02:03:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:05.918 02:03:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:05.918 02:03:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.918 02:03:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.919 02:03:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.919 02:03:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.919 02:03:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:05.919 02:03:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.919 02:03:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.919 02:03:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:05.919 02:03:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:05.919 02:03:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:05.919 02:03:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:05.919
00:04:05.919 real 0m11.362s
00:04:05.919 user 0m4.462s
00:04:05.919 sys 0m5.800s
00:04:05.919 02:03:34 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:05.919 02:03:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:05.919 ************************************
00:04:05.919 END TEST hugepages
00:04:05.919 ************************************
00:04:05.919 02:03:34 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:05.919 02:03:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:05.919 02:03:34 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:05.919 02:03:34 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:05.919 ************************************
00:04:05.919 START TEST driver
00:04:05.919 ************************************
00:04:05.919 02:03:34 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:06.178 * Looking for test storage...
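The clear_hp teardown traced just above (setup/hugepages.sh@37-45, before the driver suite banner) walks every hugepage pool on every node and zeroes it before the next suite starts. The xtrace shows only the bare "echo 0", so the nr_hugepages redirection target in this sketch is inferred rather than verbatim, and CLEAR_HUGE=yes is presumably consumed by scripts/setup.sh later:

  clear_hp() { # sketch of setup/hugepages.sh@37-45; the redirect target is inferred
      local node hp
      for node in "${!nodes_sys[@]}"; do
          for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
              echo 0 > "$hp/nr_hugepages" # release this pool (two pool sizes per node in this run)
          done
      done
      export CLEAR_HUGE=yes
  }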
00:04:06.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:06.178 02:03:34 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:06.178 02:03:34 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:06.178 02:03:34 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:08.714 02:03:36 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:08.714 02:03:36 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:08.714 02:03:36 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:08.714 02:03:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:08.714 ************************************
00:04:08.714 START TEST guess_driver
00:04:08.714 ************************************
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:08.714 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:08.715 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:08.715 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:08.715 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:08.715 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:08.715 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:08.715 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:08.715 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:08.715 Looking for driver=vfio-pci 00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.715 02:03:36 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.654 02:03:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.654 02:03:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.654 02:03:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.594 02:03:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.594 02:03:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.594 02:03:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.853 02:03:38 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:10.853 02:03:38 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:10.853 02:03:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.853 02:03:38 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.390 00:04:13.390 real 0m4.727s 00:04:13.390 user 0m1.048s 00:04:13.390 sys 0m1.788s 00:04:13.390 02:03:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.390 02:03:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.390 ************************************ 00:04:13.390 END TEST guess_driver 00:04:13.390 ************************************ 00:04:13.390 00:04:13.390 real 0m7.152s 00:04:13.390 user 0m1.579s 00:04:13.390 sys 0m2.717s 00:04:13.390 02:03:41 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.390
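The guess_driver loop traced above reads scripts/setup.sh output line by line and checks that every rebound device landed on vfio-pci. A minimal sketch of that logic, reconstructed from the xtrace (the real implementation is test/setup/driver.sh; the variable names are the ones the trace shows):

    # Rebind lines in the setup.sh output look like:
    #   0000:88:00.0 (8086 0a54): nvme -> vfio-pci
    # so field 5 is the '->' marker and field 6 is the driver just bound.
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue             # skip non-rebind lines
        [[ $setup_driver == vfio-pci ]] || fail=1     # every device must land on vfio-pci
    done < <("$rootdir/scripts/setup.sh" config)
    (( fail == 0 ))                                   # the driver.sh@64 check that passes above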
02:03:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.390 ************************************ 00:04:13.390 END TEST driver 00:04:13.390 ************************************ 00:04:13.390 02:03:41 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.390 02:03:41 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.390 02:03:41 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.390 02:03:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.390 ************************************ 00:04:13.390 START TEST devices 00:04:13.390 ************************************ 00:04:13.390 02:03:41 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.390 * Looking for test storage... 00:04:13.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:13.390 02:03:41 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.390 02:03:41 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:13.390 02:03:41 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.390 02:03:41 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.771 02:03:42 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:14.772 02:03:42 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:14.772 No valid GPT data, 
bailing 00:04:14.772 02:03:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:14.772 02:03:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:14.772 02:03:42 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:14.772 02:03:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.772 02:03:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:14.772 ************************************ 00:04:14.772 START TEST nvme_mount 00:04:14.772 ************************************ 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:14.772 02:03:42 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:14.772 02:03:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:15.708 Creating new GPT entries in memory. 00:04:15.708 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.708 other utilities. 00:04:15.708 02:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.708 02:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.708 02:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.708 02:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.708 02:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:16.644 Creating new GPT entries in memory. 00:04:16.644 The operation has completed successfully. 00:04:16.644 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.644 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.644 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 889046 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
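The mkfs helper traced here (setup/common.sh@66-72) formats the fresh partition and mounts it under the test directory. Roughly, using the names visible in the xtrace (a sketch, not the verbatim function):

    mkfs() {
        local dev=$1 mount=$2 size=$3      # e.g. /dev/nvme0n1p1 and .../test/setup/nvme_mount
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" $size         # -q quiet, -F force; empty size = whole device
        mount "$dev" "$mount"
    }

The later mkfs of the whole disk passes an explicit 1024M size, which is why the trace there shows mkfs.ext4 -qF /dev/nvme0n1 1024M.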
00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.902 02:03:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.838 02:03:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.099 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.099 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.358 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.358 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.358 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.358 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:18.358 02:03:46 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.358 02:03:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:19.739 02:03:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.740 02:03:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.740 02:03:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:21.121 02:03:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.121 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.121 00:04:21.121 real 0m6.289s 00:04:21.121 user 0m1.512s 00:04:21.121 sys 0m2.366s 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.121 02:03:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:21.121 ************************************ 00:04:21.121 END TEST nvme_mount 00:04:21.121 ************************************ 
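Both cleanup passes in the test above follow the cleanup_nvme shape traced at setup/devices.sh@20-28; as a sketch:

    cleanup_nvme() {
        # Unmount the test mount point if it is still mounted.
        mountpoint -q "$nvme_mount" && umount "$nvme_mount"
        # Drop any signatures; wipefs prints the magic it erased, e.g. '53 ef'
        # (the ext4 superblock magic) and '45 46 49 20 50 41 52 54' ("EFI PART").
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
    }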
00:04:21.121 02:03:49 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:21.121 02:03:49 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.121 02:03:49 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.121 02:03:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:21.121 ************************************ 00:04:21.121 START TEST dm_mount 00:04:21.121 ************************************ 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:21.121 02:03:49 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:22.063 Creating new GPT entries in memory. 00:04:22.063 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.063 other utilities. 00:04:22.063 02:03:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.063 02:03:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.063 02:03:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.063 02:03:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.063 02:03:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:23.032 Creating new GPT entries in memory. 00:04:23.032 The operation has completed successfully. 
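For the dm test the disk gets two 1 GiB partitions instead of one. The sector arithmetic traced at setup/common.sh@57-60 works out as follows (a worked sketch of the values the log shows):

    size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end
    done
    # First pass:  --new=1:2048:2099199     (above)
    # Second pass: --new=2:2099200:4196351  (next)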
00:04:23.032 02:03:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.032 02:03:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.032 02:03:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:23.032 02:03:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.032 02:03:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:23.985 The operation has completed successfully. 00:04:23.985 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.985 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.985 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 891436 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.243 02:03:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.192 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.451 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.451 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:25.451 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.451 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.451 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:25.452 02:03:53 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.452 02:03:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.390 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:26.650 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:26.650 00:04:26.650 real 0m5.635s 00:04:26.650 user 0m0.939s 00:04:26.650 sys 0m1.552s 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.650 02:03:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:26.650 ************************************ 00:04:26.650 END TEST dm_mount 00:04:26.650 ************************************ 00:04:26.650 02:03:54 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:26.650 02:03:54 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:26.650 02:03:54 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.650 02:03:54 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.650 02:03:54 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
02:03:54 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
02:03:54 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:26.908 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:26.908 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:26.908 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:26.908 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:26.908 02:03:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
02:03:55 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
02:03:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
02:03:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
02:03:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
02:03:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
02:03:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:26.908
00:04:26.908 real 0m13.780s
00:04:26.908 user 0m3.093s
00:04:26.908 sys 0m4.897s
00:04:26.908 02:03:55 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:26.909 02:03:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:26.909 ************************************
00:04:26.909 END TEST devices
00:04:26.909 ************************************
00:04:26.909
00:04:26.909 real 0m43.200s
00:04:26.909 user 0m12.518s
00:04:26.909 sys 0m18.915s
00:04:26.909 02:03:55 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:26.909 02:03:55 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:26.909 ************************************
00:04:26.909 END TEST setup.sh
00:04:26.909 ************************************
00:04:27.166 02:03:55 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:28.104 Hugepages
00:04:28.104 node hugesize free / total
00:04:28.104 node0 1048576kB 0 / 0
00:04:28.104 node0 2048kB 2048 / 2048
00:04:28.104 node1 1048576kB 0 / 0
00:04:28.104 node1 2048kB 0 / 0
00:04:28.104
00:04:28.104 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:28.104 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:04:28.104 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:04:28.104 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:04:28.104 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:04:28.104 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:04:28.105 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:04:28.105 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:04:28.105 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:04:28.105 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:04:28.105 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:04:28.105 02:03:56 -- spdk/autotest.sh@130 -- # uname -s
00:04:28.105 02:03:56 --
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:28.105 02:03:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:28.105 02:03:56 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.481 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:29.481 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.481 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:29.481 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:29.481 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:29.481 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:29.481 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:29.481 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:29.481 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:30.420 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.420 02:03:58 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:31.801 02:03:59 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:31.801 02:03:59 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:31.801 02:03:59 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.801 02:03:59 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:31.801 02:03:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:31.801 02:03:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:31.801 02:03:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.801 02:03:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:31.801 02:03:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:31.801 02:03:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:31.801 02:03:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:31.801 02:03:59 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.740 Waiting for block devices as requested 00:04:32.740 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.740 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:32.998 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:32.998 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:32.998 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:32.998 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:33.257 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:33.257 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:33.257 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:33.257 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:33.517 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:33.517 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:33.517 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:33.777 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:33.777 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:33.777 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:33.777 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:34.036 02:04:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
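The loop entered above iterates the output of get_nvme_bdfs; reduced to its essentials (paths exactly as used throughout this log), the discovery step is:

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # on this node the array holds a single entry: 0000:88:00.0
    for bdf in "${bdfs[@]}"; do
        echo "$bdf"
    done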
00:04:34.036 02:04:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:34.036 02:04:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:34.036 02:04:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:34.036 02:04:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:34.036 02:04:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:34.036 02:04:02 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:34.036 02:04:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:34.036 02:04:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:34.036 02:04:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:34.036 02:04:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:34.036 02:04:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:34.036 02:04:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:34.036 02:04:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:34.036 02:04:02 -- common/autotest_common.sh@1557 -- # continue 00:04:34.036 02:04:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:34.036 02:04:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.036 02:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:34.036 02:04:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:34.036 02:04:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.036 02:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:34.036 02:04:02 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.416 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.416 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:35.416 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.416 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.416 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.416 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.416 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.416 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.416 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:36.354 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:36.354 02:04:04 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:36.354 02:04:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.354 02:04:04 -- 
common/autotest_common.sh@10 -- # set +x 00:04:36.354 02:04:04 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:36.354 02:04:04 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:36.354 02:04:04 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:36.354 02:04:04 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:36.354 02:04:04 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:36.354 02:04:04 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:36.354 02:04:04 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:36.354 02:04:04 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:36.354 02:04:04 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.354 02:04:04 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:36.354 02:04:04 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:36.354 02:04:04 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:36.354 02:04:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:36.354 02:04:04 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.354 02:04:04 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:36.354 02:04:04 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:36.354 02:04:04 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:36.354 02:04:04 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:36.354 02:04:04 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:36.354 02:04:04 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:36.354 02:04:04 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=896612 00:04:36.354 02:04:04 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.354 02:04:04 -- common/autotest_common.sh@1598 -- # waitforlisten 896612 00:04:36.354 02:04:04 -- common/autotest_common.sh@831 -- # '[' -z 896612 ']' 00:04:36.354 02:04:04 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.354 02:04:04 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.354 02:04:04 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.354 02:04:04 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.354 02:04:04 -- common/autotest_common.sh@10 -- # set +x 00:04:36.613 [2024-07-27 02:04:04.536359] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:04:36.613 [2024-07-27 02:04:04.536451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896612 ] 00:04:36.613 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.613 [2024-07-27 02:04:04.568913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
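The selection step traced above, written out as a plain sketch: keep only controllers whose PCI device id matches the requested one (0x0a54 here, using the get_nvme_bdfs helper traced earlier):

    for bdf in $(get_nvme_bdfs); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        # 0x0a54 is the id reported for 0000:88:00.0 in this run
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done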
00:04:36.613 [2024-07-27 02:04:04.600209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.613 [2024-07-27 02:04:04.692048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.871 02:04:04 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.871 02:04:04 -- common/autotest_common.sh@864 -- # return 0 00:04:36.872 02:04:04 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:36.872 02:04:04 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:36.872 02:04:04 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:40.164 nvme0n1 00:04:40.164 02:04:08 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:40.164 [2024-07-27 02:04:08.242366] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:40.164 [2024-07-27 02:04:08.242414] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:40.164 request: 00:04:40.164 { 00:04:40.164 "nvme_ctrlr_name": "nvme0", 00:04:40.164 "password": "test", 00:04:40.164 "method": "bdev_nvme_opal_revert", 00:04:40.164 "req_id": 1 00:04:40.164 } 00:04:40.164 Got JSON-RPC error response 00:04:40.164 response: 00:04:40.164 { 00:04:40.164 "code": -32603, 00:04:40.164 "message": "Internal error" 00:04:40.164 } 00:04:40.164 02:04:08 -- common/autotest_common.sh@1604 -- # true 00:04:40.164 02:04:08 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:40.164 02:04:08 -- common/autotest_common.sh@1608 -- # killprocess 896612 00:04:40.164 02:04:08 -- common/autotest_common.sh@950 -- # '[' -z 896612 ']' 00:04:40.164 02:04:08 -- common/autotest_common.sh@954 -- # kill -0 896612 00:04:40.164 02:04:08 -- common/autotest_common.sh@955 -- # uname 00:04:40.164 02:04:08 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.164 02:04:08 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 896612 00:04:40.164 02:04:08 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.164 02:04:08 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.164 02:04:08 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896612' 00:04:40.164 killing process with pid 896612 00:04:40.164 02:04:08 -- common/autotest_common.sh@969 -- # kill 896612 00:04:40.164 02:04:08 -- common/autotest_common.sh@974 -- # wait 896612 00:04:42.127 02:04:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:42.127 02:04:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:42.127 02:04:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.127 02:04:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.127 02:04:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:42.127 02:04:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.127 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:04:42.127 02:04:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:42.127 02:04:10 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:42.127 02:04:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.127 02:04:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.127 02:04:10 -- common/autotest_common.sh@10 -- # set +x 00:04:42.127 ************************************ 00:04:42.127 START TEST env 
00:04:42.127 ************************************ 00:04:42.127 02:04:10 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:42.127 * Looking for test storage... 00:04:42.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:42.127 02:04:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.127 02:04:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.127 02:04:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.127 02:04:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.127 ************************************ 00:04:42.127 START TEST env_memory 00:04:42.127 ************************************ 00:04:42.127 02:04:10 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.127 00:04:42.127 00:04:42.127 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.127 http://cunit.sourceforge.net/ 00:04:42.127 00:04:42.127 00:04:42.127 Suite: memory 00:04:42.127 Test: alloc and free memory map ...[2024-07-27 02:04:10.190825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.127 passed 00:04:42.127 Test: mem map translation ...[2024-07-27 02:04:10.211413] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.127 [2024-07-27 02:04:10.211434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.127 [2024-07-27 02:04:10.211491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.127 [2024-07-27 02:04:10.211503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.127 passed 00:04:42.127 Test: mem map registration ...[2024-07-27 02:04:10.253660] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:42.127 [2024-07-27 02:04:10.253681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.127 passed 00:04:42.388 Test: mem map adjacent registrations ...passed 00:04:42.388 00:04:42.388 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.388 suites 1 1 n/a 0 0 00:04:42.388 tests 4 4 4 0 0 00:04:42.388 asserts 152 152 152 0 n/a 00:04:42.388 00:04:42.388 Elapsed time = 0.147 seconds 00:04:42.388 00:04:42.388 real 0m0.155s 00:04:42.388 user 0m0.147s 00:04:42.388 sys 0m0.008s 00:04:42.388 02:04:10 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.388 02:04:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.388 ************************************ 00:04:42.388 END TEST env_memory 00:04:42.388 ************************************ 00:04:42.388 02:04:10 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.388 02:04:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.388 02:04:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.388 02:04:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.388 ************************************ 00:04:42.388 START TEST env_vtophys 00:04:42.388 ************************************ 00:04:42.388 02:04:10 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.388 EAL: lib.eal log level changed from notice to debug 00:04:42.388 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.388 EAL: Detected lcore 1 as core 1 on socket 0 00:04:42.388 EAL: Detected lcore 2 as core 2 on socket 0 00:04:42.388 EAL: Detected lcore 3 as core 3 on socket 0 00:04:42.388 EAL: Detected lcore 4 as core 4 on socket 0 00:04:42.388 EAL: Detected lcore 5 as core 5 on socket 0 00:04:42.388 EAL: Detected lcore 6 as core 8 on socket 0 00:04:42.388 EAL: Detected lcore 7 as core 9 on socket 0 00:04:42.388 EAL: Detected lcore 8 as core 10 on socket 0 00:04:42.388 EAL: Detected lcore 9 as core 11 on socket 0 00:04:42.388 EAL: Detected lcore 10 as core 12 on socket 0 00:04:42.388 EAL: Detected lcore 11 as core 13 on socket 0 00:04:42.388 EAL: Detected lcore 12 as core 0 on socket 1 00:04:42.388 EAL: Detected lcore 13 as core 1 on socket 1 00:04:42.388 EAL: Detected lcore 14 as core 2 on socket 1 00:04:42.388 EAL: Detected lcore 15 as core 3 on socket 1 00:04:42.388 EAL: Detected lcore 16 as core 4 on socket 1 00:04:42.388 EAL: Detected lcore 17 as core 5 on socket 1 00:04:42.388 EAL: Detected lcore 18 as core 8 on socket 1 00:04:42.388 EAL: Detected lcore 19 as core 9 on socket 1 00:04:42.388 EAL: Detected lcore 20 as core 10 on socket 1 00:04:42.388 EAL: Detected lcore 21 as core 11 on socket 1 00:04:42.388 EAL: Detected lcore 22 as core 12 on socket 1 00:04:42.388 EAL: Detected lcore 23 as core 13 on socket 1 00:04:42.388 EAL: Detected lcore 24 as core 0 on socket 0 00:04:42.388 EAL: Detected lcore 25 as core 1 on socket 0 00:04:42.388 EAL: Detected lcore 26 as core 2 on socket 0 00:04:42.388 EAL: Detected lcore 27 as core 3 on socket 0 00:04:42.388 EAL: Detected lcore 28 as core 4 on socket 0 00:04:42.388 EAL: Detected lcore 29 as core 5 on socket 0 00:04:42.388 EAL: Detected lcore 30 as core 8 on socket 0 00:04:42.388 EAL: Detected lcore 31 as core 9 on socket 0 00:04:42.388 EAL: Detected lcore 32 as core 10 on socket 0 00:04:42.388 EAL: Detected lcore 33 as core 11 on socket 0 00:04:42.388 EAL: Detected lcore 34 as core 12 on socket 0 00:04:42.388 EAL: Detected lcore 35 as core 13 on socket 0 00:04:42.388 EAL: Detected lcore 36 as core 0 on socket 1 00:04:42.388 EAL: Detected lcore 37 as core 1 on socket 1 00:04:42.388 EAL: Detected lcore 38 as core 2 on socket 1 00:04:42.388 EAL: Detected lcore 39 as core 3 on socket 1 00:04:42.388 EAL: Detected lcore 40 as core 4 on socket 1 00:04:42.388 EAL: Detected lcore 41 as core 5 on socket 1 00:04:42.388 EAL: Detected lcore 42 as core 8 on socket 1 00:04:42.388 EAL: Detected lcore 43 as core 9 on socket 1 00:04:42.388 EAL: Detected lcore 44 as core 10 on socket 1 00:04:42.388 EAL: Detected lcore 45 as core 11 on socket 1 00:04:42.388 EAL: Detected lcore 46 as core 12 on socket 1 00:04:42.388 EAL: Detected lcore 47 as core 13 on socket 1 00:04:42.388 EAL: Maximum logical cores by configuration: 128 00:04:42.388 EAL: Detected CPU lcores: 48 
00:04:42.388 EAL: Detected NUMA nodes: 2 00:04:42.388 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:42.388 EAL: Detected shared linkage of DPDK 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:42.388 EAL: Registered [vdev] bus. 00:04:42.388 EAL: bus.vdev log level changed from disabled to notice 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:42.388 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:42.388 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:42.388 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:42.388 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.388 EAL: No shared files mode enabled, IPC is disabled 00:04:42.388 EAL: Bus pci wants IOVA as 'DC' 00:04:42.388 EAL: Bus vdev wants IOVA as 'DC' 00:04:42.388 EAL: Buses did not request a specific IOVA mode. 00:04:42.388 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:42.388 EAL: Selected IOVA mode 'VA' 00:04:42.388 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.388 EAL: Probing VFIO support... 00:04:42.388 EAL: IOMMU type 1 (Type 1) is supported 00:04:42.388 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:42.388 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:42.388 EAL: VFIO support initialized 00:04:42.388 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.388 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.388 EAL: Setting up physically contiguous memory... 
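The VFIO probe above succeeds because the host exposes IOMMU groups; an illustrative way to verify the same precondition outside the test:

    # a non-empty iommu_groups directory means the IOMMU is on and IOVA-as-VA is usable
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo 'IOMMU active: VFIO type 1 usable'
    fi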
00:04:42.388 EAL: Setting maximum number of open files to 524288 00:04:42.388 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.388 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:42.388 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.388 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:42.388 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.388 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:42.388 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.388 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.388 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:42.388 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:42.389 EAL: Hugepages will be freed exactly as allocated. 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: TSC frequency is ~2700000 KHz 00:04:42.389 EAL: Main lcore 0 is ready (tid=7fe1e6636a00;cpuset=[0]) 00:04:42.389 EAL: Trying to obtain current memory policy. 00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 0 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.389 00:04:42.389 00:04:42.389 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.389 http://cunit.sourceforge.net/ 00:04:42.389 00:04:42.389 00:04:42.389 Suite: components_suite 00:04:42.389 Test: vtophys_malloc_test ...passed 00:04:42.389 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 4 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.389 EAL: Trying to obtain current memory policy. 00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 4 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.389 EAL: Trying to obtain current memory policy. 00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 4 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.389 EAL: Trying to obtain current memory policy. 
00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 4 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.389 EAL: Trying to obtain current memory policy. 00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 4 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.389 EAL: Trying to obtain current memory policy. 00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 4 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.389 EAL: Trying to obtain current memory policy. 00:04:42.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.389 EAL: Restoring previous memory policy: 4 00:04:42.389 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.389 EAL: request: mp_malloc_sync 00:04:42.389 EAL: No shared files mode enabled, IPC is disabled 00:04:42.389 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.649 EAL: request: mp_malloc_sync 00:04:42.649 EAL: No shared files mode enabled, IPC is disabled 00:04:42.649 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.649 EAL: Trying to obtain current memory policy. 00:04:42.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.649 EAL: Restoring previous memory policy: 4 00:04:42.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.649 EAL: request: mp_malloc_sync 00:04:42.649 EAL: No shared files mode enabled, IPC is disabled 00:04:42.649 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.649 EAL: request: mp_malloc_sync 00:04:42.649 EAL: No shared files mode enabled, IPC is disabled 00:04:42.649 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.649 EAL: Trying to obtain current memory policy. 
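Each expand/shrink pair above is EAL growing and releasing heap backed by the 2 MB hugepages reserved earlier (node0: 2048 pages); the kernel-side accounting can be watched alongside, e.g.:

    grep -E 'HugePages_(Total|Free)' /proc/meminfo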
00:04:42.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.909 EAL: Restoring previous memory policy: 4 00:04:42.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.909 EAL: request: mp_malloc_sync 00:04:42.909 EAL: No shared files mode enabled, IPC is disabled 00:04:42.909 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.170 EAL: request: mp_malloc_sync 00:04:43.170 EAL: No shared files mode enabled, IPC is disabled 00:04:43.170 EAL: Heap on socket 0 was shrunk by 514MB 00:04:43.170 EAL: Trying to obtain current memory policy. 00:04:43.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.430 EAL: Restoring previous memory policy: 4 00:04:43.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.430 EAL: request: mp_malloc_sync 00:04:43.430 EAL: No shared files mode enabled, IPC is disabled 00:04:43.430 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.689 EAL: request: mp_malloc_sync 00:04:43.689 EAL: No shared files mode enabled, IPC is disabled 00:04:43.689 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:43.689 passed 00:04:43.689 00:04:43.689 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.689 suites 1 1 n/a 0 0 00:04:43.689 tests 2 2 2 0 0 00:04:43.689 asserts 497 497 497 0 n/a 00:04:43.689 00:04:43.689 Elapsed time = 1.375 seconds 00:04:43.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.689 EAL: request: mp_malloc_sync 00:04:43.689 EAL: No shared files mode enabled, IPC is disabled 00:04:43.689 EAL: Heap on socket 0 was shrunk by 2MB 00:04:43.689 EAL: No shared files mode enabled, IPC is disabled 00:04:43.689 EAL: No shared files mode enabled, IPC is disabled 00:04:43.689 EAL: No shared files mode enabled, IPC is disabled 00:04:43.689 00:04:43.689 real 0m1.489s 00:04:43.689 user 0m0.864s 00:04:43.689 sys 0m0.592s 00:04:43.689 02:04:11 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.689 02:04:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:43.689 ************************************ 00:04:43.689 END TEST env_vtophys 00:04:43.689 ************************************ 00:04:43.949 02:04:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.949 02:04:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.949 02:04:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.949 02:04:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.949 ************************************ 00:04:43.949 START TEST env_pci 00:04:43.949 ************************************ 00:04:43.949 02:04:11 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.949 00:04:43.949 00:04:43.949 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.949 http://cunit.sourceforge.net/ 00:04:43.949 00:04:43.949 00:04:43.949 Suite: pci 00:04:43.949 Test: pci_hook ...[2024-07-27 02:04:11.902792] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 897513 has claimed it 00:04:43.949 EAL: Cannot find device (10000:00:01.0) 00:04:43.949 EAL: Failed to attach device on primary process 00:04:43.949 passed 00:04:43.949 00:04:43.949 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:43.949 suites 1 1 n/a 0 0 00:04:43.949 tests 1 1 1 0 0 00:04:43.949 asserts 25 25 25 0 n/a 00:04:43.949 00:04:43.949 Elapsed time = 0.021 seconds 00:04:43.949 00:04:43.949 real 0m0.033s 00:04:43.949 user 0m0.009s 00:04:43.949 sys 0m0.023s 00:04:43.949 02:04:11 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.949 02:04:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:43.949 ************************************ 00:04:43.949 END TEST env_pci 00:04:43.949 ************************************ 00:04:43.949 02:04:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:43.949 02:04:11 env -- env/env.sh@15 -- # uname 00:04:43.949 02:04:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:43.949 02:04:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:43.949 02:04:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.949 02:04:11 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:43.949 02:04:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.949 02:04:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.949 ************************************ 00:04:43.949 START TEST env_dpdk_post_init 00:04:43.949 ************************************ 00:04:43.949 02:04:11 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.949 EAL: Detected CPU lcores: 48 00:04:43.949 EAL: Detected NUMA nodes: 2 00:04:43.949 EAL: Detected shared linkage of DPDK 00:04:43.949 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.949 EAL: Selected IOVA mode 'VA' 00:04:43.949 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.950 EAL: VFIO support initialized 00:04:43.950 EAL: Using IOMMU type 1 (Type 1) 00:04:49.234 Starting DPDK initialization... 00:04:49.234 Starting SPDK post initialization... 00:04:49.234 SPDK NVMe probe 00:04:49.234 Attaching to 0000:88:00.0 00:04:49.234 Attached to 0000:88:00.0 00:04:49.234 Cleaning up... 
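The probe above reaches the controller through vfio-pci (setup.sh rebound it before these tests); which driver currently owns the device can be read back from sysfs:

    basename "$(readlink -f /sys/bus/pci/devices/0000:88:00.0/driver)"
    # prints vfio-pci while the tests run, nvme again after setup.sh reset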
00:04:49.234 00:04:49.234 real 0m4.384s 00:04:49.234 user 0m3.243s 00:04:49.234 sys 0m0.195s 00:04:49.234 02:04:16 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.234 02:04:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.234 ************************************ 00:04:49.234 END TEST env_dpdk_post_init 00:04:49.234 ************************************ 00:04:49.234 02:04:16 env -- env/env.sh@26 -- # uname 00:04:49.234 02:04:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.234 02:04:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.234 02:04:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.234 02:04:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.234 02:04:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.234 ************************************ 00:04:49.234 START TEST env_mem_callbacks 00:04:49.234 ************************************ 00:04:49.234 02:04:16 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.234 EAL: Detected CPU lcores: 48 00:04:49.234 EAL: Detected NUMA nodes: 2 00:04:49.234 EAL: Detected shared linkage of DPDK 00:04:49.234 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.234 EAL: Selected IOVA mode 'VA' 00:04:49.234 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.234 EAL: VFIO support initialized 00:04:49.234 00:04:49.234 00:04:49.234 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.234 http://cunit.sourceforge.net/ 00:04:49.234 00:04:49.234 00:04:49.234 Suite: memory 00:04:49.234 Test: test ... 
00:04:49.234 register 0x200000200000 2097152 00:04:49.234 malloc 3145728 00:04:49.234 register 0x200000400000 4194304 00:04:49.234 buf 0x200000500000 len 3145728 PASSED 00:04:49.234 malloc 64 00:04:49.234 buf 0x2000004fff40 len 64 PASSED 00:04:49.234 malloc 4194304 00:04:49.234 register 0x200000800000 6291456 00:04:49.234 buf 0x200000a00000 len 4194304 PASSED 00:04:49.234 free 0x200000500000 3145728 00:04:49.234 free 0x2000004fff40 64 00:04:49.234 unregister 0x200000400000 4194304 PASSED 00:04:49.234 free 0x200000a00000 4194304 00:04:49.234 unregister 0x200000800000 6291456 PASSED 00:04:49.234 malloc 8388608 00:04:49.234 register 0x200000400000 10485760 00:04:49.234 buf 0x200000600000 len 8388608 PASSED 00:04:49.234 free 0x200000600000 8388608 00:04:49.234 unregister 0x200000400000 10485760 PASSED 00:04:49.234 passed 00:04:49.234 00:04:49.234 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.235 suites 1 1 n/a 0 0 00:04:49.235 tests 1 1 1 0 0 00:04:49.235 asserts 15 15 15 0 n/a 00:04:49.235 00:04:49.235 Elapsed time = 0.005 seconds 00:04:49.235 00:04:49.235 real 0m0.050s 00:04:49.235 user 0m0.014s 00:04:49.235 sys 0m0.035s 00:04:49.235 02:04:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.235 02:04:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 ************************************ 00:04:49.235 END TEST env_mem_callbacks 00:04:49.235 ************************************ 00:04:49.235 00:04:49.235 real 0m6.399s 00:04:49.235 user 0m4.391s 00:04:49.235 sys 0m1.048s 00:04:49.235 02:04:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.235 02:04:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 ************************************ 00:04:49.235 END TEST env 00:04:49.235 ************************************ 00:04:49.235 02:04:16 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.235 02:04:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.235 02:04:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.235 02:04:16 -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 ************************************ 00:04:49.235 START TEST rpc 00:04:49.235 ************************************ 00:04:49.235 02:04:16 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.235 * Looking for test storage... 00:04:49.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.235 02:04:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=898163 00:04:49.235 02:04:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.235 02:04:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.235 02:04:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 898163 00:04:49.235 02:04:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 898163 ']' 00:04:49.235 02:04:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.235 02:04:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.235 02:04:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
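waitforlisten, stripped to its core idea, polls the RPC socket until the freshly started target answers (a sketch, not the harness code itself):

    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done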
00:04:49.235 02:04:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.235 02:04:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 [2024-07-27 02:04:16.628427] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:04:49.235 [2024-07-27 02:04:16.628503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898163 ] 00:04:49.235 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.235 [2024-07-27 02:04:16.659522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:49.235 [2024-07-27 02:04:16.686005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.235 [2024-07-27 02:04:16.770737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.235 [2024-07-27 02:04:16.770794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 898163' to capture a snapshot of events at runtime. 00:04:49.235 [2024-07-27 02:04:16.770821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.235 [2024-07-27 02:04:16.770833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.235 [2024-07-27 02:04:16.770842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid898163 for offline analysis/debug. 00:04:49.235 [2024-07-27 02:04:16.770868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.235 02:04:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.235 02:04:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:49.235 02:04:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.235 02:04:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.235 02:04:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:49.235 02:04:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:49.235 02:04:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.235 02:04:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.235 02:04:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 ************************************ 00:04:49.235 START TEST rpc_integrity 00:04:49.235 ************************************ 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 02:04:17 rpc.rpc_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.235 { 00:04:49.235 "name": "Malloc0", 00:04:49.235 "aliases": [ 00:04:49.235 "8cb2a6d1-821d-4c30-bd64-3b13c29dd697" 00:04:49.235 ], 00:04:49.235 "product_name": "Malloc disk", 00:04:49.235 "block_size": 512, 00:04:49.235 "num_blocks": 16384, 00:04:49.235 "uuid": "8cb2a6d1-821d-4c30-bd64-3b13c29dd697", 00:04:49.235 "assigned_rate_limits": { 00:04:49.235 "rw_ios_per_sec": 0, 00:04:49.235 "rw_mbytes_per_sec": 0, 00:04:49.235 "r_mbytes_per_sec": 0, 00:04:49.235 "w_mbytes_per_sec": 0 00:04:49.235 }, 00:04:49.235 "claimed": false, 00:04:49.235 "zoned": false, 00:04:49.235 "supported_io_types": { 00:04:49.235 "read": true, 00:04:49.235 "write": true, 00:04:49.235 "unmap": true, 00:04:49.235 "flush": true, 00:04:49.235 "reset": true, 00:04:49.235 "nvme_admin": false, 00:04:49.235 "nvme_io": false, 00:04:49.235 "nvme_io_md": false, 00:04:49.235 "write_zeroes": true, 00:04:49.235 "zcopy": true, 00:04:49.235 "get_zone_info": false, 00:04:49.235 "zone_management": false, 00:04:49.235 "zone_append": false, 00:04:49.235 "compare": false, 00:04:49.235 "compare_and_write": false, 00:04:49.235 "abort": true, 00:04:49.235 "seek_hole": false, 00:04:49.235 "seek_data": false, 00:04:49.235 "copy": true, 00:04:49.235 "nvme_iov_md": false 00:04:49.235 }, 00:04:49.235 "memory_domains": [ 00:04:49.235 { 00:04:49.235 "dma_device_id": "system", 00:04:49.235 "dma_device_type": 1 00:04:49.235 }, 00:04:49.235 { 00:04:49.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.235 "dma_device_type": 2 00:04:49.235 } 00:04:49.235 ], 00:04:49.235 "driver_specific": {} 00:04:49.235 } 00:04:49.235 ]' 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 [2024-07-27 02:04:17.164924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.235 [2024-07-27 02:04:17.164972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.235 [2024-07-27 02:04:17.164997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x89e7f0 00:04:49.235 
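The rpc_cmd shorthand in the trace maps onto direct rpc.py calls; the sequence up to this point is, approximately:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                     # 8 MB malloc bdev, 512 B blocks -> Malloc0
    $rpc bdev_get_bdevs | jq length                   # 1 after creation
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0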
[2024-07-27 02:04:17.165013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.235 [2024-07-27 02:04:17.166518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.235 [2024-07-27 02:04:17.166546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.235 Passthru0 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.235 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.235 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.235 { 00:04:49.235 "name": "Malloc0", 00:04:49.235 "aliases": [ 00:04:49.235 "8cb2a6d1-821d-4c30-bd64-3b13c29dd697" 00:04:49.235 ], 00:04:49.235 "product_name": "Malloc disk", 00:04:49.235 "block_size": 512, 00:04:49.235 "num_blocks": 16384, 00:04:49.235 "uuid": "8cb2a6d1-821d-4c30-bd64-3b13c29dd697", 00:04:49.235 "assigned_rate_limits": { 00:04:49.235 "rw_ios_per_sec": 0, 00:04:49.235 "rw_mbytes_per_sec": 0, 00:04:49.235 "r_mbytes_per_sec": 0, 00:04:49.236 "w_mbytes_per_sec": 0 00:04:49.236 }, 00:04:49.236 "claimed": true, 00:04:49.236 "claim_type": "exclusive_write", 00:04:49.236 "zoned": false, 00:04:49.236 "supported_io_types": { 00:04:49.236 "read": true, 00:04:49.236 "write": true, 00:04:49.236 "unmap": true, 00:04:49.236 "flush": true, 00:04:49.236 "reset": true, 00:04:49.236 "nvme_admin": false, 00:04:49.236 "nvme_io": false, 00:04:49.236 "nvme_io_md": false, 00:04:49.236 "write_zeroes": true, 00:04:49.236 "zcopy": true, 00:04:49.236 "get_zone_info": false, 00:04:49.236 "zone_management": false, 00:04:49.236 "zone_append": false, 00:04:49.236 "compare": false, 00:04:49.236 "compare_and_write": false, 00:04:49.236 "abort": true, 00:04:49.236 "seek_hole": false, 00:04:49.236 "seek_data": false, 00:04:49.236 "copy": true, 00:04:49.236 "nvme_iov_md": false 00:04:49.236 }, 00:04:49.236 "memory_domains": [ 00:04:49.236 { 00:04:49.236 "dma_device_id": "system", 00:04:49.236 "dma_device_type": 1 00:04:49.236 }, 00:04:49.236 { 00:04:49.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.236 "dma_device_type": 2 00:04:49.236 } 00:04:49.236 ], 00:04:49.236 "driver_specific": {} 00:04:49.236 }, 00:04:49.236 { 00:04:49.236 "name": "Passthru0", 00:04:49.236 "aliases": [ 00:04:49.236 "dd0ee40a-bae4-5714-9cb6-6b540ba2fa70" 00:04:49.236 ], 00:04:49.236 "product_name": "passthru", 00:04:49.236 "block_size": 512, 00:04:49.236 "num_blocks": 16384, 00:04:49.236 "uuid": "dd0ee40a-bae4-5714-9cb6-6b540ba2fa70", 00:04:49.236 "assigned_rate_limits": { 00:04:49.236 "rw_ios_per_sec": 0, 00:04:49.236 "rw_mbytes_per_sec": 0, 00:04:49.236 "r_mbytes_per_sec": 0, 00:04:49.236 "w_mbytes_per_sec": 0 00:04:49.236 }, 00:04:49.236 "claimed": false, 00:04:49.236 "zoned": false, 00:04:49.236 "supported_io_types": { 00:04:49.236 "read": true, 00:04:49.236 "write": true, 00:04:49.236 "unmap": true, 00:04:49.236 "flush": true, 00:04:49.236 "reset": true, 00:04:49.236 "nvme_admin": false, 00:04:49.236 "nvme_io": false, 00:04:49.236 "nvme_io_md": false, 00:04:49.236 "write_zeroes": true, 00:04:49.236 "zcopy": true, 00:04:49.236 "get_zone_info": false, 00:04:49.236 "zone_management": false, 00:04:49.236 "zone_append": false, 00:04:49.236 
"compare": false, 00:04:49.236 "compare_and_write": false, 00:04:49.236 "abort": true, 00:04:49.236 "seek_hole": false, 00:04:49.236 "seek_data": false, 00:04:49.236 "copy": true, 00:04:49.236 "nvme_iov_md": false 00:04:49.236 }, 00:04:49.236 "memory_domains": [ 00:04:49.236 { 00:04:49.236 "dma_device_id": "system", 00:04:49.236 "dma_device_type": 1 00:04:49.236 }, 00:04:49.236 { 00:04:49.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.236 "dma_device_type": 2 00:04:49.236 } 00:04:49.236 ], 00:04:49.236 "driver_specific": { 00:04:49.236 "passthru": { 00:04:49.236 "name": "Passthru0", 00:04:49.236 "base_bdev_name": "Malloc0" 00:04:49.236 } 00:04:49.236 } 00:04:49.236 } 00:04:49.236 ]' 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.236 02:04:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.236 00:04:49.236 real 0m0.236s 00:04:49.236 user 0m0.150s 00:04:49.236 sys 0m0.024s 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.236 02:04:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.236 ************************************ 00:04:49.236 END TEST rpc_integrity 00:04:49.236 ************************************ 00:04:49.236 02:04:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.236 02:04:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.236 02:04:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.236 02:04:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.236 ************************************ 00:04:49.236 START TEST rpc_plugins 00:04:49.236 ************************************ 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:49.236 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.236 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.236 
02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.236 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.236 { 00:04:49.236 "name": "Malloc1", 00:04:49.236 "aliases": [ 00:04:49.236 "b7db1c62-6ab5-4e6c-83c7-57959e428d61" 00:04:49.236 ], 00:04:49.236 "product_name": "Malloc disk", 00:04:49.236 "block_size": 4096, 00:04:49.236 "num_blocks": 256, 00:04:49.236 "uuid": "b7db1c62-6ab5-4e6c-83c7-57959e428d61", 00:04:49.236 "assigned_rate_limits": { 00:04:49.236 "rw_ios_per_sec": 0, 00:04:49.236 "rw_mbytes_per_sec": 0, 00:04:49.236 "r_mbytes_per_sec": 0, 00:04:49.236 "w_mbytes_per_sec": 0 00:04:49.236 }, 00:04:49.236 "claimed": false, 00:04:49.236 "zoned": false, 00:04:49.236 "supported_io_types": { 00:04:49.236 "read": true, 00:04:49.236 "write": true, 00:04:49.236 "unmap": true, 00:04:49.236 "flush": true, 00:04:49.236 "reset": true, 00:04:49.236 "nvme_admin": false, 00:04:49.236 "nvme_io": false, 00:04:49.236 "nvme_io_md": false, 00:04:49.236 "write_zeroes": true, 00:04:49.236 "zcopy": true, 00:04:49.236 "get_zone_info": false, 00:04:49.236 "zone_management": false, 00:04:49.236 "zone_append": false, 00:04:49.236 "compare": false, 00:04:49.236 "compare_and_write": false, 00:04:49.236 "abort": true, 00:04:49.236 "seek_hole": false, 00:04:49.236 "seek_data": false, 00:04:49.236 "copy": true, 00:04:49.236 "nvme_iov_md": false 00:04:49.236 }, 00:04:49.236 "memory_domains": [ 00:04:49.236 { 00:04:49.236 "dma_device_id": "system", 00:04:49.236 "dma_device_type": 1 00:04:49.236 }, 00:04:49.236 { 00:04:49.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.236 "dma_device_type": 2 00:04:49.236 } 00:04:49.236 ], 00:04:49.236 "driver_specific": {} 00:04:49.236 } 00:04:49.236 ]' 00:04:49.236 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:49.236 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.236 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.236 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.497 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.497 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.497 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.497 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.497 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.497 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.497 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:49.497 02:04:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.497 00:04:49.497 real 0m0.109s 00:04:49.497 user 0m0.071s 00:04:49.497 sys 0m0.012s 00:04:49.497 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.497 02:04:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.497 ************************************ 00:04:49.497 END TEST rpc_plugins 00:04:49.497 ************************************ 00:04:49.497 02:04:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:04:49.497 02:04:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.497 02:04:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.497 02:04:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.497 ************************************ 00:04:49.497 START TEST rpc_trace_cmd_test 00:04:49.497 ************************************ 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:49.497 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid898163", 00:04:49.497 "tpoint_group_mask": "0x8", 00:04:49.497 "iscsi_conn": { 00:04:49.497 "mask": "0x2", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "scsi": { 00:04:49.497 "mask": "0x4", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "bdev": { 00:04:49.497 "mask": "0x8", 00:04:49.497 "tpoint_mask": "0xffffffffffffffff" 00:04:49.497 }, 00:04:49.497 "nvmf_rdma": { 00:04:49.497 "mask": "0x10", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "nvmf_tcp": { 00:04:49.497 "mask": "0x20", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "ftl": { 00:04:49.497 "mask": "0x40", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "blobfs": { 00:04:49.497 "mask": "0x80", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "dsa": { 00:04:49.497 "mask": "0x200", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "thread": { 00:04:49.497 "mask": "0x400", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "nvme_pcie": { 00:04:49.497 "mask": "0x800", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "iaa": { 00:04:49.497 "mask": "0x1000", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "nvme_tcp": { 00:04:49.497 "mask": "0x2000", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "bdev_nvme": { 00:04:49.497 "mask": "0x4000", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 }, 00:04:49.497 "sock": { 00:04:49.497 "mask": "0x8000", 00:04:49.497 "tpoint_mask": "0x0" 00:04:49.497 } 00:04:49.497 }' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:49.497 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:49.757 02:04:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:49.757 00:04:49.757 real 
0m0.201s 00:04:49.757 user 0m0.179s 00:04:49.757 sys 0m0.014s 00:04:49.757 02:04:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.757 02:04:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.757 ************************************ 00:04:49.757 END TEST rpc_trace_cmd_test 00:04:49.757 ************************************ 00:04:49.757 02:04:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:49.757 02:04:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:49.757 02:04:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:49.757 02:04:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.757 02:04:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.757 02:04:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.757 ************************************ 00:04:49.757 START TEST rpc_daemon_integrity 00:04:49.757 ************************************ 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.757 { 00:04:49.757 "name": "Malloc2", 00:04:49.757 "aliases": [ 00:04:49.757 "482b6552-9575-4ad4-a1c9-e998064975a6" 00:04:49.757 ], 00:04:49.757 "product_name": "Malloc disk", 00:04:49.757 "block_size": 512, 00:04:49.757 "num_blocks": 16384, 00:04:49.757 "uuid": "482b6552-9575-4ad4-a1c9-e998064975a6", 00:04:49.757 "assigned_rate_limits": { 00:04:49.757 "rw_ios_per_sec": 0, 00:04:49.757 "rw_mbytes_per_sec": 0, 00:04:49.757 "r_mbytes_per_sec": 0, 00:04:49.757 "w_mbytes_per_sec": 0 00:04:49.757 }, 00:04:49.757 "claimed": false, 00:04:49.757 "zoned": false, 00:04:49.757 "supported_io_types": { 00:04:49.757 "read": true, 00:04:49.757 "write": true, 00:04:49.757 "unmap": true, 00:04:49.757 "flush": true, 00:04:49.757 "reset": true, 00:04:49.757 "nvme_admin": false, 00:04:49.757 "nvme_io": false, 00:04:49.757 "nvme_io_md": false, 00:04:49.757 "write_zeroes": true, 00:04:49.757 "zcopy": true, 
00:04:49.757 "get_zone_info": false, 00:04:49.757 "zone_management": false, 00:04:49.757 "zone_append": false, 00:04:49.757 "compare": false, 00:04:49.757 "compare_and_write": false, 00:04:49.757 "abort": true, 00:04:49.757 "seek_hole": false, 00:04:49.757 "seek_data": false, 00:04:49.757 "copy": true, 00:04:49.757 "nvme_iov_md": false 00:04:49.757 }, 00:04:49.757 "memory_domains": [ 00:04:49.757 { 00:04:49.757 "dma_device_id": "system", 00:04:49.757 "dma_device_type": 1 00:04:49.757 }, 00:04:49.757 { 00:04:49.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.757 "dma_device_type": 2 00:04:49.757 } 00:04:49.757 ], 00:04:49.757 "driver_specific": {} 00:04:49.757 } 00:04:49.757 ]' 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.757 [2024-07-27 02:04:17.839033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:49.757 [2024-07-27 02:04:17.839088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.757 [2024-07-27 02:04:17.839130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa42490 00:04:49.757 [2024-07-27 02:04:17.839145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.757 [2024-07-27 02:04:17.840481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.757 [2024-07-27 02:04:17.840508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.757 Passthru0 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.757 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.757 { 00:04:49.757 "name": "Malloc2", 00:04:49.757 "aliases": [ 00:04:49.757 "482b6552-9575-4ad4-a1c9-e998064975a6" 00:04:49.757 ], 00:04:49.757 "product_name": "Malloc disk", 00:04:49.757 "block_size": 512, 00:04:49.757 "num_blocks": 16384, 00:04:49.757 "uuid": "482b6552-9575-4ad4-a1c9-e998064975a6", 00:04:49.757 "assigned_rate_limits": { 00:04:49.757 "rw_ios_per_sec": 0, 00:04:49.757 "rw_mbytes_per_sec": 0, 00:04:49.757 "r_mbytes_per_sec": 0, 00:04:49.757 "w_mbytes_per_sec": 0 00:04:49.757 }, 00:04:49.757 "claimed": true, 00:04:49.757 "claim_type": "exclusive_write", 00:04:49.757 "zoned": false, 00:04:49.757 "supported_io_types": { 00:04:49.757 "read": true, 00:04:49.757 "write": true, 00:04:49.757 "unmap": true, 00:04:49.757 "flush": true, 00:04:49.757 "reset": true, 00:04:49.757 "nvme_admin": false, 00:04:49.757 "nvme_io": false, 00:04:49.757 "nvme_io_md": false, 00:04:49.757 "write_zeroes": true, 00:04:49.757 "zcopy": true, 00:04:49.757 "get_zone_info": false, 00:04:49.757 "zone_management": false, 00:04:49.757 "zone_append": false, 00:04:49.757 
"compare": false, 00:04:49.757 "compare_and_write": false, 00:04:49.757 "abort": true, 00:04:49.757 "seek_hole": false, 00:04:49.757 "seek_data": false, 00:04:49.757 "copy": true, 00:04:49.757 "nvme_iov_md": false 00:04:49.757 }, 00:04:49.757 "memory_domains": [ 00:04:49.757 { 00:04:49.757 "dma_device_id": "system", 00:04:49.757 "dma_device_type": 1 00:04:49.757 }, 00:04:49.757 { 00:04:49.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.757 "dma_device_type": 2 00:04:49.757 } 00:04:49.757 ], 00:04:49.757 "driver_specific": {} 00:04:49.757 }, 00:04:49.757 { 00:04:49.757 "name": "Passthru0", 00:04:49.757 "aliases": [ 00:04:49.757 "381e9ac2-85c9-53ad-b8f1-2f9b28358e7a" 00:04:49.757 ], 00:04:49.757 "product_name": "passthru", 00:04:49.757 "block_size": 512, 00:04:49.757 "num_blocks": 16384, 00:04:49.757 "uuid": "381e9ac2-85c9-53ad-b8f1-2f9b28358e7a", 00:04:49.757 "assigned_rate_limits": { 00:04:49.757 "rw_ios_per_sec": 0, 00:04:49.757 "rw_mbytes_per_sec": 0, 00:04:49.757 "r_mbytes_per_sec": 0, 00:04:49.757 "w_mbytes_per_sec": 0 00:04:49.757 }, 00:04:49.757 "claimed": false, 00:04:49.757 "zoned": false, 00:04:49.757 "supported_io_types": { 00:04:49.757 "read": true, 00:04:49.757 "write": true, 00:04:49.757 "unmap": true, 00:04:49.757 "flush": true, 00:04:49.757 "reset": true, 00:04:49.757 "nvme_admin": false, 00:04:49.757 "nvme_io": false, 00:04:49.757 "nvme_io_md": false, 00:04:49.757 "write_zeroes": true, 00:04:49.757 "zcopy": true, 00:04:49.757 "get_zone_info": false, 00:04:49.757 "zone_management": false, 00:04:49.757 "zone_append": false, 00:04:49.757 "compare": false, 00:04:49.757 "compare_and_write": false, 00:04:49.757 "abort": true, 00:04:49.757 "seek_hole": false, 00:04:49.757 "seek_data": false, 00:04:49.757 "copy": true, 00:04:49.757 "nvme_iov_md": false 00:04:49.757 }, 00:04:49.757 "memory_domains": [ 00:04:49.757 { 00:04:49.757 "dma_device_id": "system", 00:04:49.757 "dma_device_type": 1 00:04:49.757 }, 00:04:49.757 { 00:04:49.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.757 "dma_device_type": 2 00:04:49.757 } 00:04:49.757 ], 00:04:49.757 "driver_specific": { 00:04:49.757 "passthru": { 00:04:49.757 "name": "Passthru0", 00:04:49.758 "base_bdev_name": "Malloc2" 00:04:49.758 } 00:04:49.758 } 00:04:49.758 } 00:04:49.758 ]' 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.758 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.017 02:04:17 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.017 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.017 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.017 02:04:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.017 00:04:50.017 real 0m0.228s 00:04:50.017 user 0m0.153s 00:04:50.017 sys 0m0.018s 00:04:50.017 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.017 02:04:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.017 ************************************ 00:04:50.017 END TEST rpc_daemon_integrity 00:04:50.017 ************************************ 00:04:50.017 02:04:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.017 02:04:17 rpc -- rpc/rpc.sh@84 -- # killprocess 898163 00:04:50.017 02:04:17 rpc -- common/autotest_common.sh@950 -- # '[' -z 898163 ']' 00:04:50.017 02:04:17 rpc -- common/autotest_common.sh@954 -- # kill -0 898163 00:04:50.017 02:04:17 rpc -- common/autotest_common.sh@955 -- # uname 00:04:50.017 02:04:17 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.017 02:04:17 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898163 00:04:50.017 02:04:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.017 02:04:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.017 02:04:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898163' 00:04:50.017 killing process with pid 898163 00:04:50.017 02:04:18 rpc -- common/autotest_common.sh@969 -- # kill 898163 00:04:50.017 02:04:18 rpc -- common/autotest_common.sh@974 -- # wait 898163 00:04:50.275 00:04:50.275 real 0m1.876s 00:04:50.275 user 0m2.381s 00:04:50.275 sys 0m0.572s 00:04:50.275 02:04:18 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.275 02:04:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.275 ************************************ 00:04:50.275 END TEST rpc 00:04:50.275 ************************************ 00:04:50.275 02:04:18 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:50.275 02:04:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.275 02:04:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.275 02:04:18 -- common/autotest_common.sh@10 -- # set +x 00:04:50.532 ************************************ 00:04:50.532 START TEST skip_rpc 00:04:50.532 ************************************ 00:04:50.532 02:04:18 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:50.532 * Looking for test storage... 
00:04:50.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.532 02:04:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.532 02:04:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.532 02:04:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:50.532 02:04:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.532 02:04:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.532 02:04:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.532 ************************************ 00:04:50.532 START TEST skip_rpc 00:04:50.532 ************************************ 00:04:50.532 02:04:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:50.533 02:04:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=898602 00:04:50.533 02:04:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:50.533 02:04:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.533 02:04:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:50.533 [2024-07-27 02:04:18.584439] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:04:50.533 [2024-07-27 02:04:18.584515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898602 ] 00:04:50.533 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.533 [2024-07-27 02:04:18.614269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
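At this point skip_rpc has launched the target with its RPC server disabled (--no-rpc-server), so the spdk_get_version call in the trace below is expected to fail; that failure is the entire assertion. A hand-run equivalent, as a sketch with binary and client paths assumed to match this workspace:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    ./scripts/rpc.py spdk_get_version \
        && echo "unexpected: RPC server answered" \
        || echo "ok: no RPC server"              # expected branch
    kill %1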
00:04:50.533 [2024-07-27 02:04:18.644179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.792 [2024-07-27 02:04:18.734473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 898602 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 898602 ']' 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 898602 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898602 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898602' 00:04:56.076 killing process with pid 898602 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 898602 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 898602 00:04:56.076 00:04:56.076 real 0m5.438s 00:04:56.076 user 0m5.121s 00:04:56.076 sys 0m0.321s 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.076 02:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 ************************************ 00:04:56.076 END TEST skip_rpc 00:04:56.076 ************************************ 00:04:56.076 02:04:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.077 02:04:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.077 02:04:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.077 
02:04:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.077 ************************************ 00:04:56.077 START TEST skip_rpc_with_json 00:04:56.077 ************************************ 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=899288 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 899288 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 899288 ']' 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.077 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.077 [2024-07-27 02:04:24.069977] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:04:56.077 [2024-07-27 02:04:24.070078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899288 ] 00:04:56.077 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.077 [2024-07-27 02:04:24.101623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
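The exchange that follows, including the deliberate error response, maps onto three RPCs; a sketch of issuing them directly against the freshly started target, with method names taken verbatim from the trace and only the client path assumed:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails: transport 'tcp' does not exist
    ./scripts/rpc.py nvmf_create_transport -t tcp       # logs: *** TCP Transport Init ***
    ./scripts/rpc.py save_config > config.json          # produces the subsystem dump shown below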
00:04:56.077 [2024-07-27 02:04:24.126968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.077 [2024-07-27 02:04:24.215422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.337 [2024-07-27 02:04:24.473917] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:56.337 request: 00:04:56.337 { 00:04:56.337 "trtype": "tcp", 00:04:56.337 "method": "nvmf_get_transports", 00:04:56.337 "req_id": 1 00:04:56.337 } 00:04:56.337 Got JSON-RPC error response 00:04:56.337 response: 00:04:56.337 { 00:04:56.337 "code": -19, 00:04:56.337 "message": "No such device" 00:04:56.337 } 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.337 [2024-07-27 02:04:24.482054] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.337 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.597 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.597 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.597 { 00:04:56.597 "subsystems": [ 00:04:56.597 { 00:04:56.597 "subsystem": "vfio_user_target", 00:04:56.597 "config": null 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "subsystem": "keyring", 00:04:56.597 "config": [] 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "subsystem": "iobuf", 00:04:56.597 "config": [ 00:04:56.597 { 00:04:56.597 "method": "iobuf_set_options", 00:04:56.597 "params": { 00:04:56.597 "small_pool_count": 8192, 00:04:56.597 "large_pool_count": 1024, 00:04:56.597 "small_bufsize": 8192, 00:04:56.597 "large_bufsize": 135168 00:04:56.597 } 00:04:56.597 } 00:04:56.597 ] 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "subsystem": "sock", 00:04:56.597 "config": [ 00:04:56.597 { 00:04:56.597 "method": "sock_set_default_impl", 00:04:56.597 "params": { 00:04:56.597 "impl_name": "posix" 00:04:56.597 } 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "method": "sock_impl_set_options", 00:04:56.597 "params": { 00:04:56.597 "impl_name": "ssl", 00:04:56.597 "recv_buf_size": 4096, 00:04:56.597 "send_buf_size": 4096, 00:04:56.597 "enable_recv_pipe": true, 00:04:56.597 "enable_quickack": false, 00:04:56.597 "enable_placement_id": 0, 00:04:56.597 "enable_zerocopy_send_server": true, 00:04:56.597 
"enable_zerocopy_send_client": false, 00:04:56.597 "zerocopy_threshold": 0, 00:04:56.597 "tls_version": 0, 00:04:56.597 "enable_ktls": false 00:04:56.597 } 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "method": "sock_impl_set_options", 00:04:56.597 "params": { 00:04:56.597 "impl_name": "posix", 00:04:56.597 "recv_buf_size": 2097152, 00:04:56.597 "send_buf_size": 2097152, 00:04:56.597 "enable_recv_pipe": true, 00:04:56.597 "enable_quickack": false, 00:04:56.597 "enable_placement_id": 0, 00:04:56.597 "enable_zerocopy_send_server": true, 00:04:56.597 "enable_zerocopy_send_client": false, 00:04:56.597 "zerocopy_threshold": 0, 00:04:56.597 "tls_version": 0, 00:04:56.597 "enable_ktls": false 00:04:56.597 } 00:04:56.597 } 00:04:56.597 ] 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "subsystem": "vmd", 00:04:56.597 "config": [] 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "subsystem": "accel", 00:04:56.597 "config": [ 00:04:56.597 { 00:04:56.597 "method": "accel_set_options", 00:04:56.597 "params": { 00:04:56.597 "small_cache_size": 128, 00:04:56.597 "large_cache_size": 16, 00:04:56.597 "task_count": 2048, 00:04:56.597 "sequence_count": 2048, 00:04:56.597 "buf_count": 2048 00:04:56.597 } 00:04:56.597 } 00:04:56.597 ] 00:04:56.597 }, 00:04:56.597 { 00:04:56.597 "subsystem": "bdev", 00:04:56.597 "config": [ 00:04:56.597 { 00:04:56.597 "method": "bdev_set_options", 00:04:56.597 "params": { 00:04:56.597 "bdev_io_pool_size": 65535, 00:04:56.597 "bdev_io_cache_size": 256, 00:04:56.597 "bdev_auto_examine": true, 00:04:56.598 "iobuf_small_cache_size": 128, 00:04:56.598 "iobuf_large_cache_size": 16 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": "bdev_raid_set_options", 00:04:56.598 "params": { 00:04:56.598 "process_window_size_kb": 1024, 00:04:56.598 "process_max_bandwidth_mb_sec": 0 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": "bdev_iscsi_set_options", 00:04:56.598 "params": { 00:04:56.598 "timeout_sec": 30 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": "bdev_nvme_set_options", 00:04:56.598 "params": { 00:04:56.598 "action_on_timeout": "none", 00:04:56.598 "timeout_us": 0, 00:04:56.598 "timeout_admin_us": 0, 00:04:56.598 "keep_alive_timeout_ms": 10000, 00:04:56.598 "arbitration_burst": 0, 00:04:56.598 "low_priority_weight": 0, 00:04:56.598 "medium_priority_weight": 0, 00:04:56.598 "high_priority_weight": 0, 00:04:56.598 "nvme_adminq_poll_period_us": 10000, 00:04:56.598 "nvme_ioq_poll_period_us": 0, 00:04:56.598 "io_queue_requests": 0, 00:04:56.598 "delay_cmd_submit": true, 00:04:56.598 "transport_retry_count": 4, 00:04:56.598 "bdev_retry_count": 3, 00:04:56.598 "transport_ack_timeout": 0, 00:04:56.598 "ctrlr_loss_timeout_sec": 0, 00:04:56.598 "reconnect_delay_sec": 0, 00:04:56.598 "fast_io_fail_timeout_sec": 0, 00:04:56.598 "disable_auto_failback": false, 00:04:56.598 "generate_uuids": false, 00:04:56.598 "transport_tos": 0, 00:04:56.598 "nvme_error_stat": false, 00:04:56.598 "rdma_srq_size": 0, 00:04:56.598 "io_path_stat": false, 00:04:56.598 "allow_accel_sequence": false, 00:04:56.598 "rdma_max_cq_size": 0, 00:04:56.598 "rdma_cm_event_timeout_ms": 0, 00:04:56.598 "dhchap_digests": [ 00:04:56.598 "sha256", 00:04:56.598 "sha384", 00:04:56.598 "sha512" 00:04:56.598 ], 00:04:56.598 "dhchap_dhgroups": [ 00:04:56.598 "null", 00:04:56.598 "ffdhe2048", 00:04:56.598 "ffdhe3072", 00:04:56.598 "ffdhe4096", 00:04:56.598 "ffdhe6144", 00:04:56.598 "ffdhe8192" 00:04:56.598 ] 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": 
"bdev_nvme_set_hotplug", 00:04:56.598 "params": { 00:04:56.598 "period_us": 100000, 00:04:56.598 "enable": false 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": "bdev_wait_for_examine" 00:04:56.598 } 00:04:56.598 ] 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "scsi", 00:04:56.598 "config": null 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "scheduler", 00:04:56.598 "config": [ 00:04:56.598 { 00:04:56.598 "method": "framework_set_scheduler", 00:04:56.598 "params": { 00:04:56.598 "name": "static" 00:04:56.598 } 00:04:56.598 } 00:04:56.598 ] 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "vhost_scsi", 00:04:56.598 "config": [] 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "vhost_blk", 00:04:56.598 "config": [] 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "ublk", 00:04:56.598 "config": [] 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "nbd", 00:04:56.598 "config": [] 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "nvmf", 00:04:56.598 "config": [ 00:04:56.598 { 00:04:56.598 "method": "nvmf_set_config", 00:04:56.598 "params": { 00:04:56.598 "discovery_filter": "match_any", 00:04:56.598 "admin_cmd_passthru": { 00:04:56.598 "identify_ctrlr": false 00:04:56.598 } 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": "nvmf_set_max_subsystems", 00:04:56.598 "params": { 00:04:56.598 "max_subsystems": 1024 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": "nvmf_set_crdt", 00:04:56.598 "params": { 00:04:56.598 "crdt1": 0, 00:04:56.598 "crdt2": 0, 00:04:56.598 "crdt3": 0 00:04:56.598 } 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "method": "nvmf_create_transport", 00:04:56.598 "params": { 00:04:56.598 "trtype": "TCP", 00:04:56.598 "max_queue_depth": 128, 00:04:56.598 "max_io_qpairs_per_ctrlr": 127, 00:04:56.598 "in_capsule_data_size": 4096, 00:04:56.598 "max_io_size": 131072, 00:04:56.598 "io_unit_size": 131072, 00:04:56.598 "max_aq_depth": 128, 00:04:56.598 "num_shared_buffers": 511, 00:04:56.598 "buf_cache_size": 4294967295, 00:04:56.598 "dif_insert_or_strip": false, 00:04:56.598 "zcopy": false, 00:04:56.598 "c2h_success": true, 00:04:56.598 "sock_priority": 0, 00:04:56.598 "abort_timeout_sec": 1, 00:04:56.598 "ack_timeout": 0, 00:04:56.598 "data_wr_pool_size": 0 00:04:56.598 } 00:04:56.598 } 00:04:56.598 ] 00:04:56.598 }, 00:04:56.598 { 00:04:56.598 "subsystem": "iscsi", 00:04:56.598 "config": [ 00:04:56.598 { 00:04:56.598 "method": "iscsi_set_options", 00:04:56.598 "params": { 00:04:56.598 "node_base": "iqn.2016-06.io.spdk", 00:04:56.598 "max_sessions": 128, 00:04:56.598 "max_connections_per_session": 2, 00:04:56.598 "max_queue_depth": 64, 00:04:56.598 "default_time2wait": 2, 00:04:56.598 "default_time2retain": 20, 00:04:56.598 "first_burst_length": 8192, 00:04:56.598 "immediate_data": true, 00:04:56.598 "allow_duplicated_isid": false, 00:04:56.598 "error_recovery_level": 0, 00:04:56.598 "nop_timeout": 60, 00:04:56.598 "nop_in_interval": 30, 00:04:56.598 "disable_chap": false, 00:04:56.598 "require_chap": false, 00:04:56.598 "mutual_chap": false, 00:04:56.598 "chap_group": 0, 00:04:56.598 "max_large_datain_per_connection": 64, 00:04:56.598 "max_r2t_per_connection": 4, 00:04:56.598 "pdu_pool_size": 36864, 00:04:56.598 "immediate_data_pool_size": 16384, 00:04:56.598 "data_out_pool_size": 2048 00:04:56.598 } 00:04:56.598 } 00:04:56.598 ] 00:04:56.598 } 00:04:56.598 ] 00:04:56.598 } 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 899288 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 899288 ']' 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 899288 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 899288 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 899288' 00:04:56.598 killing process with pid 899288 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 899288 00:04:56.598 02:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 899288 00:04:57.173 02:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=899432 00:04:57.173 02:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.173 02:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.453 02:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 899432 00:05:02.453 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 899432 ']' 00:05:02.453 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 899432 00:05:02.453 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:02.453 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.453 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 899432 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 899432' 00:05:02.454 killing process with pid 899432 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 899432 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 899432 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:02.454 00:05:02.454 real 0m6.484s 00:05:02.454 user 0m6.067s 00:05:02.454 sys 0m0.692s 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.454 ************************************ 
00:05:02.454 END TEST skip_rpc_with_json 00:05:02.454 ************************************ 00:05:02.454 02:04:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:02.454 02:04:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.454 02:04:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.454 02:04:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.454 ************************************ 00:05:02.454 START TEST skip_rpc_with_delay 00:05:02.454 ************************************ 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.454 [2024-07-27 02:04:30.599700] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
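The app.c error just logged is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt refuses --wait-for-rpc when --no-rpc-server means no RPC server will ever come up. Reproduced in isolation (sketch, workspace-relative path assumed):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # -> app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
    # The non-zero exit is what the NOT wrapper in the trace turns into a pass (es=1).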
00:05:02.454 [2024-07-27 02:04:30.599805] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:02.454 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.454 00:05:02.454 real 0m0.068s 00:05:02.454 user 0m0.048s 00:05:02.454 sys 0m0.019s 00:05:02.713 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.713 02:04:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:02.713 ************************************ 00:05:02.713 END TEST skip_rpc_with_delay 00:05:02.713 ************************************ 00:05:02.713 02:04:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:02.713 02:04:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:02.713 02:04:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:02.713 02:04:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.713 02:04:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.713 02:04:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.713 ************************************ 00:05:02.713 START TEST exit_on_failed_rpc_init 00:05:02.713 ************************************ 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=900142 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 900142 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 900142 ']' 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.713 02:04:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.713 [2024-07-27 02:04:30.717446] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:05:02.713 [2024-07-27 02:04:30.717559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900142 ] 00:05:02.713 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.713 [2024-07-27 02:04:30.748821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:02.713 [2024-07-27 02:04:30.780376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.713 [2024-07-27 02:04:30.870969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.972 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.230 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.230 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.230 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.230 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.230 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:03.230 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.230 [2024-07-27 02:04:31.186468] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:03.230 [2024-07-27 02:04:31.186552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900159 ] 00:05:03.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.230 [2024-07-27 02:04:31.216190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:03.230 [2024-07-27 02:04:31.247348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.230 [2024-07-27 02:04:31.341882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.230 [2024-07-27 02:04:31.342000] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:03.230 [2024-07-27 02:04:31.342023] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:03.230 [2024-07-27 02:04:31.342037] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 900142 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 900142 ']' 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 900142 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 900142 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 900142' 00:05:03.490 killing process with pid 900142 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 900142 00:05:03.490 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 900142 00:05:03.748 00:05:03.748 real 0m1.215s 00:05:03.748 user 0m1.313s 00:05:03.748 sys 0m0.457s 00:05:03.748 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.748 02:04:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.748 ************************************ 00:05:03.748 END TEST exit_on_failed_rpc_init 00:05:03.748 ************************************ 00:05:03.748 02:04:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.748 00:05:03.748 real 0m13.451s 00:05:03.748 user 0m12.653s 00:05:03.748 sys 0m1.647s 00:05:03.748 02:04:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.748 02:04:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.748 
************************************ 00:05:03.748 END TEST skip_rpc 00:05:03.748 ************************************ 00:05:04.043 02:04:31 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.043 02:04:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.043 02:04:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.043 02:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:04.043 ************************************ 00:05:04.043 START TEST rpc_client 00:05:04.043 ************************************ 00:05:04.043 02:04:31 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.043 * Looking for test storage... 00:05:04.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:04.043 02:04:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:04.043 OK 00:05:04.043 02:04:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:04.043 00:05:04.043 real 0m0.063s 00:05:04.043 user 0m0.021s 00:05:04.043 sys 0m0.047s 00:05:04.043 02:04:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.043 02:04:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:04.043 ************************************ 00:05:04.043 END TEST rpc_client 00:05:04.043 ************************************ 00:05:04.043 02:04:32 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:04.043 02:04:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.043 02:04:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.043 02:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.043 ************************************ 00:05:04.043 START TEST json_config 00:05:04.043 ************************************ 00:05:04.043 02:04:32 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.043 02:04:32 
json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.043 02:04:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.043 02:04:32 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.043 02:04:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.043 02:04:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.043 02:04:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.043 02:04:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.043 02:04:32 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.043 02:04:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@47 -- # : 0 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:04.043 02:04:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:04.043 02:04:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:04.044 INFO: JSON configuration test init 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.044 02:04:32 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:04.044 02:04:32 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.044 02:04:32 json_config -- json_config/common.sh@10 -- # shift 00:05:04.044 02:04:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.044 02:04:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.044 02:04:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.044 02:04:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.044 02:04:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.044 02:04:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=900403 00:05:04.044 02:04:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r 
/var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:04.044 02:04:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.044 Waiting for target to run... 00:05:04.044 02:04:32 json_config -- json_config/common.sh@25 -- # waitforlisten 900403 /var/tmp/spdk_tgt.sock 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 900403 ']' 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.044 02:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.044 [2024-07-27 02:04:32.169781] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:04.044 [2024-07-27 02:04:32.169881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900403 ] 00:05:04.303 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.563 [2024-07-27 02:04:32.628643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:04.563 [2024-07-27 02:04:32.663139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.822 [2024-07-27 02:04:32.745038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.080 02:04:33 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.080 02:04:33 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:05.080 02:04:33 json_config -- json_config/common.sh@26 -- # echo '' 00:05:05.080 00:05:05.080 02:04:33 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:05.080 02:04:33 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:05.080 02:04:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.080 02:04:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.080 02:04:33 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:05.080 02:04:33 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:05.080 02:04:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.080 02:04:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.080 02:04:33 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:05.080 02:04:33 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:05.080 02:04:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.373 
02:04:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.373 02:04:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:08.373 02:04:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@51 -- # sort 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:08.373 02:04:36 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:08.373 02:04:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.373 02:04:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:08.634 02:04:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.634 02:04:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:08.634 02:04:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.634 02:04:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.634 MallocForNvmf0 00:05:08.893 02:04:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.893 02:04:36 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.893 MallocForNvmf1 00:05:08.893 02:04:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.893 02:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.152 [2024-07-27 02:04:37.274127] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.152 02:04:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.152 02:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.410 02:04:37 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.410 02:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.668 02:04:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.668 02:04:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.926 02:04:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.926 02:04:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.184 [2024-07-27 02:04:38.265405] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.184 02:04:38 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:10.184 02:04:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.184 02:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.184 02:04:38 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:10.184 02:04:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.184 02:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.184 02:04:38 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:10.184 02:04:38 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.184 02:04:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.443 MallocBdevForConfigChangeCheck 00:05:10.443 02:04:38 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:10.443 02:04:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.443 
02:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.443 02:04:38 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:10.443 02:04:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.009 02:04:38 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:11.009 INFO: shutting down applications... 00:05:11.009 02:04:38 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:11.009 02:04:38 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:11.009 02:04:38 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:11.009 02:04:38 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:12.917 Calling clear_iscsi_subsystem 00:05:12.917 Calling clear_nvmf_subsystem 00:05:12.917 Calling clear_nbd_subsystem 00:05:12.917 Calling clear_ublk_subsystem 00:05:12.917 Calling clear_vhost_blk_subsystem 00:05:12.917 Calling clear_vhost_scsi_subsystem 00:05:12.917 Calling clear_bdev_subsystem 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@349 -- # break 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:12.917 02:04:40 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:12.917 02:04:40 json_config -- json_config/common.sh@31 -- # local app=target 00:05:12.917 02:04:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.917 02:04:40 json_config -- json_config/common.sh@35 -- # [[ -n 900403 ]] 00:05:12.917 02:04:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 900403 00:05:12.917 02:04:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.918 02:04:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.918 02:04:40 json_config -- json_config/common.sh@41 -- # kill -0 900403 00:05:12.918 02:04:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.483 02:04:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.483 02:04:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.483 02:04:41 json_config -- json_config/common.sh@41 -- # kill -0 900403 00:05:13.483 02:04:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.483 02:04:41 json_config -- json_config/common.sh@43 -- # break 00:05:13.483 02:04:41 json_config -- json_config/common.sh@48 -- # [[ 
-n '' ]] 00:05:13.483 02:04:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.483 SPDK target shutdown done 00:05:13.483 02:04:41 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:13.483 INFO: relaunching applications... 00:05:13.483 02:04:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.483 02:04:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:13.483 02:04:41 json_config -- json_config/common.sh@10 -- # shift 00:05:13.483 02:04:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.483 02:04:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.483 02:04:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.483 02:04:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.483 02:04:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.483 02:04:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=901606 00:05:13.483 02:04:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.483 02:04:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.483 Waiting for target to run... 00:05:13.483 02:04:41 json_config -- json_config/common.sh@25 -- # waitforlisten 901606 /var/tmp/spdk_tgt.sock 00:05:13.483 02:04:41 json_config -- common/autotest_common.sh@831 -- # '[' -z 901606 ']' 00:05:13.483 02:04:41 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.483 02:04:41 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.483 02:04:41 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.483 02:04:41 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.483 02:04:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.483 [2024-07-27 02:04:41.560181] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:13.483 [2024-07-27 02:04:41.560265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901606 ] 00:05:13.483 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.049 [2024-07-27 02:04:42.063813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
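The relaunch above is the heart of the json_config test: the live configuration is captured with save_config, the target is killed, and a fresh spdk_tgt is booted from the saved file. A minimal sketch of that round trip, using the binaries and paths from this run (the retry and shutdown handling in json_config/common.sh is omitted):

  # Capture the running target's configuration over its RPC socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
  # Relaunch the target; it rebuilds the same bdevs and NVMf subsystems from the JSON file.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &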
00:05:14.049 [2024-07-27 02:04:42.097611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.049 [2024-07-27 02:04:42.179630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.333 [2024-07-27 02:04:45.210733] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.333 [2024-07-27 02:04:45.243224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:17.899 02:04:45 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.899 02:04:45 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:17.899 02:04:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:17.899 00:05:17.899 02:04:45 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:17.899 02:04:45 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:17.899 INFO: Checking if target configuration is the same... 00:05:17.899 02:04:45 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.899 02:04:45 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:17.899 02:04:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.899 + '[' 2 -ne 2 ']' 00:05:17.899 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:17.899 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:17.899 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:17.899 +++ basename /dev/fd/62 00:05:17.899 ++ mktemp /tmp/62.XXX 00:05:17.899 + tmp_file_1=/tmp/62.NWl 00:05:17.899 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.899 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:17.899 + tmp_file_2=/tmp/spdk_tgt_config.json.kSy 00:05:17.899 + ret=0 00:05:17.899 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.467 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.467 + diff -u /tmp/62.NWl /tmp/spdk_tgt_config.json.kSy 00:05:18.467 + echo 'INFO: JSON config files are the same' 00:05:18.467 INFO: JSON config files are the same 00:05:18.467 + rm /tmp/62.NWl /tmp/spdk_tgt_config.json.kSy 00:05:18.467 + exit 0 00:05:18.467 02:04:46 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:18.467 02:04:46 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.467 INFO: changing configuration and checking if this can be detected... 
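The "JSON config files are the same" verdict above comes from a simple pipeline inside json_diff.sh: both configurations are normalized with config_filter.py -method sort and then compared with diff. A sketch under this run's paths (the /tmp file names here are illustrative stand-ins for the mktemp names above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  # Sort both configs so object/key ordering cannot cause a spurious mismatch.
  "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.sorted.json
  "$filter" -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/file.sorted.json
  # diff exits 0 only if the relaunched target reproduced the saved configuration exactly.
  diff -u /tmp/live.sorted.json /tmp/file.sorted.json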
00:05:18.467 02:04:46 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.467 02:04:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.725 02:04:46 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.725 02:04:46 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:18.725 02:04:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.725 + '[' 2 -ne 2 ']' 00:05:18.725 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.725 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:18.725 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.725 +++ basename /dev/fd/62 00:05:18.725 ++ mktemp /tmp/62.XXX 00:05:18.725 + tmp_file_1=/tmp/62.PLj 00:05:18.725 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.725 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.725 + tmp_file_2=/tmp/spdk_tgt_config.json.1By 00:05:18.725 + ret=0 00:05:18.725 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.983 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.983 + diff -u /tmp/62.PLj /tmp/spdk_tgt_config.json.1By 00:05:18.983 + ret=1 00:05:18.983 + echo '=== Start of file: /tmp/62.PLj ===' 00:05:18.983 + cat /tmp/62.PLj 00:05:18.983 + echo '=== End of file: /tmp/62.PLj ===' 00:05:18.983 + echo '' 00:05:18.983 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1By ===' 00:05:18.983 + cat /tmp/spdk_tgt_config.json.1By 00:05:18.983 + echo '=== End of file: /tmp/spdk_tgt_config.json.1By ===' 00:05:18.983 + echo '' 00:05:18.983 + rm /tmp/62.PLj /tmp/spdk_tgt_config.json.1By 00:05:18.983 + exit 1 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:18.983 INFO: configuration change detected. 
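Change detection is the negative counterpart of the check above: one bdev (MallocBdevForConfigChangeCheck, created earlier for exactly this purpose) is deleted from the live target, after which the same diff must fail. A sketch of the step, with the same illustrative temp-file names:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  # Mutate the live configuration by removing the sentinel bdev.
  "$rpc" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # Re-run the comparison; diff now exits non-zero, which is what the test requires.
  "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.sorted.json
  diff -u /tmp/live.sorted.json /tmp/file.sorted.json || echo 'configuration change detected'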
00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:18.983 02:04:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.983 02:04:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@321 -- # [[ -n 901606 ]] 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:18.983 02:04:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.983 02:04:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:18.983 02:04:47 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:18.983 02:04:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.983 02:04:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.243 02:04:47 json_config -- json_config/json_config.sh@327 -- # killprocess 901606 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@950 -- # '[' -z 901606 ']' 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@954 -- # kill -0 901606 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@955 -- # uname 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 901606 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 901606' 00:05:19.243 killing process with pid 901606 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@969 -- # kill 901606 00:05:19.243 02:04:47 json_config -- common/autotest_common.sh@974 -- # wait 901606 00:05:20.622 02:04:48 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.622 02:04:48 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:20.622 02:04:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.622 02:04:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.881 02:04:48 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:20.881 02:04:48 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:20.881 INFO: Success 00:05:20.881 00:05:20.881 real 0m16.745s 00:05:20.881 user 
0m18.525s 00:05:20.881 sys 0m2.245s 00:05:20.881 02:04:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.881 02:04:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.881 ************************************ 00:05:20.881 END TEST json_config 00:05:20.881 ************************************ 00:05:20.881 02:04:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:20.882 02:04:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.882 02:04:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.882 02:04:48 -- common/autotest_common.sh@10 -- # set +x 00:05:20.882 ************************************ 00:05:20.882 START TEST json_config_extra_key 00:05:20.882 ************************************ 00:05:20.882 02:04:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:20.882 02:04:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.882 02:04:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.882 02:04:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.882 02:04:48 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.882 02:04:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.882 02:04:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.882 02:04:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:20.882 02:04:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:20.882 02:04:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:20.882 02:04:48 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:20.882 INFO: launching applications... 00:05:20.882 02:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=902632 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.882 Waiting for target to run... 00:05:20.882 02:04:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 902632 /var/tmp/spdk_tgt.sock 00:05:20.882 02:04:48 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 902632 ']' 00:05:20.882 02:04:48 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.882 02:04:48 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.882 02:04:48 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.882 02:04:48 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.882 02:04:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.882 [2024-07-27 02:04:48.948733] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
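Unlike the json_config test, json_config_extra_key boots the target directly from a canned configuration file (extra_key.json) rather than building the config over RPC first. A sketch of the launch as invoked above, with a simplified stand-in for the waitforlisten() helper used by the harness:

  # Start the target from the canned config; everything in extra_key.json is created at boot.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &
  # Poll the RPC socket until the app answers (simplified version of waitforlisten).
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done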
00:05:20.882 [2024-07-27 02:04:48.948814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902632 ] 00:05:20.882 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.142 [2024-07-27 02:04:49.246513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:21.142 [2024-07-27 02:04:49.280250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.401 [2024-07-27 02:04:49.344386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.973 02:04:49 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.973 02:04:49 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:21.973 00:05:21.973 02:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:21.973 INFO: shutting down applications... 00:05:21.973 02:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 902632 ]] 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 902632 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 902632 00:05:21.973 02:04:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.263 02:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.263 02:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.263 02:04:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 902632 00:05:22.263 02:04:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.263 02:04:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:22.263 02:04:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.263 02:04:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.263 SPDK target shutdown done 00:05:22.263 02:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:22.263 Success 00:05:22.263 00:05:22.263 real 0m1.542s 00:05:22.263 user 0m1.501s 00:05:22.263 sys 0m0.434s 00:05:22.263 02:04:50 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.263 02:04:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.263 ************************************ 00:05:22.263 END TEST json_config_extra_key 00:05:22.263 ************************************ 00:05:22.522 02:04:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.522 02:04:50 -- common/autotest_common.sh@1101 -- # '[' 2 
-le 1 ']' 00:05:22.522 02:04:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.522 02:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.522 ************************************ 00:05:22.522 START TEST alias_rpc 00:05:22.522 ************************************ 00:05:22.522 02:04:50 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.522 * Looking for test storage... 00:05:22.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:22.522 02:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.522 02:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=902834 00:05:22.522 02:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.522 02:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 902834 00:05:22.522 02:04:50 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 902834 ']' 00:05:22.522 02:04:50 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.522 02:04:50 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.522 02:04:50 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.522 02:04:50 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.522 02:04:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.522 [2024-07-27 02:04:50.541774] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:22.522 [2024-07-27 02:04:50.541867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902834 ] 00:05:22.522 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.522 [2024-07-27 02:04:50.574924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
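The body of the alias_rpc test is small and visible a few lines below: it starts a plain spdk_tgt and replays a configuration through rpc.py load_config -i, where -i (include-aliases) makes rpc.py accept deprecated, aliased RPC method names from the JSON. A sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and an illustrative temp file:

  # Save the current config, then reload it with aliases enabled, so JSON written
  # against old (aliased) method names still applies cleanly.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config > /tmp/alias_config.json
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i < /tmp/alias_config.json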
00:05:22.522 [2024-07-27 02:04:50.606631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.782 [2024-07-27 02:04:50.700014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.042 02:04:50 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.042 02:04:50 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:23.042 02:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:23.301 02:04:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 902834 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 902834 ']' 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 902834 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 902834 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 902834' 00:05:23.301 killing process with pid 902834 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@969 -- # kill 902834 00:05:23.301 02:04:51 alias_rpc -- common/autotest_common.sh@974 -- # wait 902834 00:05:23.560 00:05:23.560 real 0m1.207s 00:05:23.560 user 0m1.282s 00:05:23.560 sys 0m0.425s 00:05:23.560 02:04:51 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.560 02:04:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.560 ************************************ 00:05:23.560 END TEST alias_rpc 00:05:23.560 ************************************ 00:05:23.560 02:04:51 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:23.560 02:04:51 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:23.560 02:04:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.560 02:04:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.560 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:23.560 ************************************ 00:05:23.560 START TEST spdkcli_tcp 00:05:23.560 ************************************ 00:05:23.560 02:04:51 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:23.820 * Looking for test storage... 
00:05:23.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:23.820 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:23.820 02:04:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:23.820 02:04:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:23.821 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:23.821 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:23.821 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:23.821 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.821 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=903121 00:05:23.821 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:23.821 02:04:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 903121 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 903121 ']' 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.821 02:04:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.821 [2024-07-27 02:04:51.795628] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:23.821 [2024-07-27 02:04:51.795723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903121 ] 00:05:23.821 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.821 [2024-07-27 02:04:51.826908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
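spdkcli_tcp exercises the same RPC server over TCP instead of the UNIX socket; as the next lines show, it does this by bridging 127.0.0.1:9998 to /var/tmp/spdk.sock with socat. A sketch of the bridge and one RPC over it (port, retry count, and timeout as in this run):

  # Forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # Issue an RPC over TCP; -r retries up to 100 times, -t applies a 2-second timeout.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods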
00:05:23.821 [2024-07-27 02:04:51.853531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.821 [2024-07-27 02:04:51.938575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.821 [2024-07-27 02:04:51.938579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.079 02:04:52 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.079 02:04:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:24.079 02:04:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=903138 00:05:24.079 02:04:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.079 02:04:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:24.338 [ 00:05:24.338 "bdev_malloc_delete", 00:05:24.338 "bdev_malloc_create", 00:05:24.338 "bdev_null_resize", 00:05:24.338 "bdev_null_delete", 00:05:24.338 "bdev_null_create", 00:05:24.338 "bdev_nvme_cuse_unregister", 00:05:24.338 "bdev_nvme_cuse_register", 00:05:24.338 "bdev_opal_new_user", 00:05:24.338 "bdev_opal_set_lock_state", 00:05:24.338 "bdev_opal_delete", 00:05:24.338 "bdev_opal_get_info", 00:05:24.338 "bdev_opal_create", 00:05:24.338 "bdev_nvme_opal_revert", 00:05:24.338 "bdev_nvme_opal_init", 00:05:24.338 "bdev_nvme_send_cmd", 00:05:24.338 "bdev_nvme_get_path_iostat", 00:05:24.338 "bdev_nvme_get_mdns_discovery_info", 00:05:24.338 "bdev_nvme_stop_mdns_discovery", 00:05:24.338 "bdev_nvme_start_mdns_discovery", 00:05:24.338 "bdev_nvme_set_multipath_policy", 00:05:24.338 "bdev_nvme_set_preferred_path", 00:05:24.338 "bdev_nvme_get_io_paths", 00:05:24.338 "bdev_nvme_remove_error_injection", 00:05:24.338 "bdev_nvme_add_error_injection", 00:05:24.338 "bdev_nvme_get_discovery_info", 00:05:24.338 "bdev_nvme_stop_discovery", 00:05:24.338 "bdev_nvme_start_discovery", 00:05:24.338 "bdev_nvme_get_controller_health_info", 00:05:24.338 "bdev_nvme_disable_controller", 00:05:24.338 "bdev_nvme_enable_controller", 00:05:24.338 "bdev_nvme_reset_controller", 00:05:24.338 "bdev_nvme_get_transport_statistics", 00:05:24.338 "bdev_nvme_apply_firmware", 00:05:24.338 "bdev_nvme_detach_controller", 00:05:24.338 "bdev_nvme_get_controllers", 00:05:24.338 "bdev_nvme_attach_controller", 00:05:24.338 "bdev_nvme_set_hotplug", 00:05:24.338 "bdev_nvme_set_options", 00:05:24.338 "bdev_passthru_delete", 00:05:24.338 "bdev_passthru_create", 00:05:24.338 "bdev_lvol_set_parent_bdev", 00:05:24.338 "bdev_lvol_set_parent", 00:05:24.338 "bdev_lvol_check_shallow_copy", 00:05:24.338 "bdev_lvol_start_shallow_copy", 00:05:24.338 "bdev_lvol_grow_lvstore", 00:05:24.338 "bdev_lvol_get_lvols", 00:05:24.338 "bdev_lvol_get_lvstores", 00:05:24.338 "bdev_lvol_delete", 00:05:24.338 "bdev_lvol_set_read_only", 00:05:24.338 "bdev_lvol_resize", 00:05:24.338 "bdev_lvol_decouple_parent", 00:05:24.338 "bdev_lvol_inflate", 00:05:24.338 "bdev_lvol_rename", 00:05:24.338 "bdev_lvol_clone_bdev", 00:05:24.338 "bdev_lvol_clone", 00:05:24.338 "bdev_lvol_snapshot", 00:05:24.338 "bdev_lvol_create", 00:05:24.338 "bdev_lvol_delete_lvstore", 00:05:24.338 "bdev_lvol_rename_lvstore", 00:05:24.338 "bdev_lvol_create_lvstore", 00:05:24.338 "bdev_raid_set_options", 00:05:24.338 "bdev_raid_remove_base_bdev", 00:05:24.338 "bdev_raid_add_base_bdev", 00:05:24.338 "bdev_raid_delete", 00:05:24.338 "bdev_raid_create", 00:05:24.338 "bdev_raid_get_bdevs", 00:05:24.338 "bdev_error_inject_error", 00:05:24.338 "bdev_error_delete", 
00:05:24.338 "bdev_error_create", 00:05:24.338 "bdev_split_delete", 00:05:24.338 "bdev_split_create", 00:05:24.338 "bdev_delay_delete", 00:05:24.338 "bdev_delay_create", 00:05:24.338 "bdev_delay_update_latency", 00:05:24.338 "bdev_zone_block_delete", 00:05:24.338 "bdev_zone_block_create", 00:05:24.338 "blobfs_create", 00:05:24.338 "blobfs_detect", 00:05:24.338 "blobfs_set_cache_size", 00:05:24.338 "bdev_aio_delete", 00:05:24.338 "bdev_aio_rescan", 00:05:24.338 "bdev_aio_create", 00:05:24.338 "bdev_ftl_set_property", 00:05:24.338 "bdev_ftl_get_properties", 00:05:24.338 "bdev_ftl_get_stats", 00:05:24.338 "bdev_ftl_unmap", 00:05:24.338 "bdev_ftl_unload", 00:05:24.338 "bdev_ftl_delete", 00:05:24.338 "bdev_ftl_load", 00:05:24.338 "bdev_ftl_create", 00:05:24.338 "bdev_virtio_attach_controller", 00:05:24.338 "bdev_virtio_scsi_get_devices", 00:05:24.338 "bdev_virtio_detach_controller", 00:05:24.338 "bdev_virtio_blk_set_hotplug", 00:05:24.338 "bdev_iscsi_delete", 00:05:24.338 "bdev_iscsi_create", 00:05:24.338 "bdev_iscsi_set_options", 00:05:24.338 "accel_error_inject_error", 00:05:24.338 "ioat_scan_accel_module", 00:05:24.338 "dsa_scan_accel_module", 00:05:24.338 "iaa_scan_accel_module", 00:05:24.338 "vfu_virtio_create_scsi_endpoint", 00:05:24.338 "vfu_virtio_scsi_remove_target", 00:05:24.338 "vfu_virtio_scsi_add_target", 00:05:24.338 "vfu_virtio_create_blk_endpoint", 00:05:24.338 "vfu_virtio_delete_endpoint", 00:05:24.338 "keyring_file_remove_key", 00:05:24.338 "keyring_file_add_key", 00:05:24.338 "keyring_linux_set_options", 00:05:24.338 "iscsi_get_histogram", 00:05:24.338 "iscsi_enable_histogram", 00:05:24.338 "iscsi_set_options", 00:05:24.338 "iscsi_get_auth_groups", 00:05:24.338 "iscsi_auth_group_remove_secret", 00:05:24.338 "iscsi_auth_group_add_secret", 00:05:24.338 "iscsi_delete_auth_group", 00:05:24.338 "iscsi_create_auth_group", 00:05:24.338 "iscsi_set_discovery_auth", 00:05:24.338 "iscsi_get_options", 00:05:24.338 "iscsi_target_node_request_logout", 00:05:24.339 "iscsi_target_node_set_redirect", 00:05:24.339 "iscsi_target_node_set_auth", 00:05:24.339 "iscsi_target_node_add_lun", 00:05:24.339 "iscsi_get_stats", 00:05:24.339 "iscsi_get_connections", 00:05:24.339 "iscsi_portal_group_set_auth", 00:05:24.339 "iscsi_start_portal_group", 00:05:24.339 "iscsi_delete_portal_group", 00:05:24.339 "iscsi_create_portal_group", 00:05:24.339 "iscsi_get_portal_groups", 00:05:24.339 "iscsi_delete_target_node", 00:05:24.339 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.339 "iscsi_target_node_add_pg_ig_maps", 00:05:24.339 "iscsi_create_target_node", 00:05:24.339 "iscsi_get_target_nodes", 00:05:24.339 "iscsi_delete_initiator_group", 00:05:24.339 "iscsi_initiator_group_remove_initiators", 00:05:24.339 "iscsi_initiator_group_add_initiators", 00:05:24.339 "iscsi_create_initiator_group", 00:05:24.339 "iscsi_get_initiator_groups", 00:05:24.339 "nvmf_set_crdt", 00:05:24.339 "nvmf_set_config", 00:05:24.339 "nvmf_set_max_subsystems", 00:05:24.339 "nvmf_stop_mdns_prr", 00:05:24.339 "nvmf_publish_mdns_prr", 00:05:24.339 "nvmf_subsystem_get_listeners", 00:05:24.339 "nvmf_subsystem_get_qpairs", 00:05:24.339 "nvmf_subsystem_get_controllers", 00:05:24.339 "nvmf_get_stats", 00:05:24.339 "nvmf_get_transports", 00:05:24.339 "nvmf_create_transport", 00:05:24.339 "nvmf_get_targets", 00:05:24.339 "nvmf_delete_target", 00:05:24.339 "nvmf_create_target", 00:05:24.339 "nvmf_subsystem_allow_any_host", 00:05:24.339 "nvmf_subsystem_remove_host", 00:05:24.339 "nvmf_subsystem_add_host", 00:05:24.339 "nvmf_ns_remove_host", 
00:05:24.339 "nvmf_ns_add_host", 00:05:24.339 "nvmf_subsystem_remove_ns", 00:05:24.339 "nvmf_subsystem_add_ns", 00:05:24.339 "nvmf_subsystem_listener_set_ana_state", 00:05:24.339 "nvmf_discovery_get_referrals", 00:05:24.339 "nvmf_discovery_remove_referral", 00:05:24.339 "nvmf_discovery_add_referral", 00:05:24.339 "nvmf_subsystem_remove_listener", 00:05:24.339 "nvmf_subsystem_add_listener", 00:05:24.339 "nvmf_delete_subsystem", 00:05:24.339 "nvmf_create_subsystem", 00:05:24.339 "nvmf_get_subsystems", 00:05:24.339 "env_dpdk_get_mem_stats", 00:05:24.339 "nbd_get_disks", 00:05:24.339 "nbd_stop_disk", 00:05:24.339 "nbd_start_disk", 00:05:24.339 "ublk_recover_disk", 00:05:24.339 "ublk_get_disks", 00:05:24.339 "ublk_stop_disk", 00:05:24.339 "ublk_start_disk", 00:05:24.339 "ublk_destroy_target", 00:05:24.339 "ublk_create_target", 00:05:24.339 "virtio_blk_create_transport", 00:05:24.339 "virtio_blk_get_transports", 00:05:24.339 "vhost_controller_set_coalescing", 00:05:24.339 "vhost_get_controllers", 00:05:24.339 "vhost_delete_controller", 00:05:24.339 "vhost_create_blk_controller", 00:05:24.339 "vhost_scsi_controller_remove_target", 00:05:24.339 "vhost_scsi_controller_add_target", 00:05:24.339 "vhost_start_scsi_controller", 00:05:24.339 "vhost_create_scsi_controller", 00:05:24.339 "thread_set_cpumask", 00:05:24.339 "framework_get_governor", 00:05:24.339 "framework_get_scheduler", 00:05:24.339 "framework_set_scheduler", 00:05:24.339 "framework_get_reactors", 00:05:24.339 "thread_get_io_channels", 00:05:24.339 "thread_get_pollers", 00:05:24.339 "thread_get_stats", 00:05:24.339 "framework_monitor_context_switch", 00:05:24.339 "spdk_kill_instance", 00:05:24.339 "log_enable_timestamps", 00:05:24.339 "log_get_flags", 00:05:24.339 "log_clear_flag", 00:05:24.339 "log_set_flag", 00:05:24.339 "log_get_level", 00:05:24.339 "log_set_level", 00:05:24.339 "log_get_print_level", 00:05:24.339 "log_set_print_level", 00:05:24.339 "framework_enable_cpumask_locks", 00:05:24.339 "framework_disable_cpumask_locks", 00:05:24.339 "framework_wait_init", 00:05:24.339 "framework_start_init", 00:05:24.339 "scsi_get_devices", 00:05:24.339 "bdev_get_histogram", 00:05:24.339 "bdev_enable_histogram", 00:05:24.339 "bdev_set_qos_limit", 00:05:24.339 "bdev_set_qd_sampling_period", 00:05:24.339 "bdev_get_bdevs", 00:05:24.339 "bdev_reset_iostat", 00:05:24.339 "bdev_get_iostat", 00:05:24.339 "bdev_examine", 00:05:24.339 "bdev_wait_for_examine", 00:05:24.339 "bdev_set_options", 00:05:24.339 "notify_get_notifications", 00:05:24.339 "notify_get_types", 00:05:24.339 "accel_get_stats", 00:05:24.339 "accel_set_options", 00:05:24.339 "accel_set_driver", 00:05:24.339 "accel_crypto_key_destroy", 00:05:24.339 "accel_crypto_keys_get", 00:05:24.339 "accel_crypto_key_create", 00:05:24.339 "accel_assign_opc", 00:05:24.339 "accel_get_module_info", 00:05:24.339 "accel_get_opc_assignments", 00:05:24.339 "vmd_rescan", 00:05:24.339 "vmd_remove_device", 00:05:24.339 "vmd_enable", 00:05:24.339 "sock_get_default_impl", 00:05:24.339 "sock_set_default_impl", 00:05:24.339 "sock_impl_set_options", 00:05:24.339 "sock_impl_get_options", 00:05:24.339 "iobuf_get_stats", 00:05:24.339 "iobuf_set_options", 00:05:24.339 "keyring_get_keys", 00:05:24.339 "framework_get_pci_devices", 00:05:24.339 "framework_get_config", 00:05:24.339 "framework_get_subsystems", 00:05:24.339 "vfu_tgt_set_base_path", 00:05:24.339 "trace_get_info", 00:05:24.339 "trace_get_tpoint_group_mask", 00:05:24.339 "trace_disable_tpoint_group", 00:05:24.339 "trace_enable_tpoint_group", 00:05:24.339 
"trace_clear_tpoint_mask", 00:05:24.339 "trace_set_tpoint_mask", 00:05:24.339 "spdk_get_version", 00:05:24.339 "rpc_get_methods" 00:05:24.339 ] 00:05:24.339 02:04:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.339 02:04:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.339 02:04:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 903121 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 903121 ']' 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 903121 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 903121 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 903121' 00:05:24.339 killing process with pid 903121 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 903121 00:05:24.339 02:04:52 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 903121 00:05:24.908 00:05:24.908 real 0m1.181s 00:05:24.908 user 0m2.088s 00:05:24.908 sys 0m0.452s 00:05:24.908 02:04:52 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.908 02:04:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.908 ************************************ 00:05:24.908 END TEST spdkcli_tcp 00:05:24.908 ************************************ 00:05:24.908 02:04:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.908 02:04:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.908 02:04:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.908 02:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:24.908 ************************************ 00:05:24.908 START TEST dpdk_mem_utility 00:05:24.908 ************************************ 00:05:24.908 02:04:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.908 * Looking for test storage... 
00:05:24.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:24.908 02:04:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:24.908 02:04:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=903334 00:05:24.908 02:04:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.908 02:04:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 903334 00:05:24.908 02:04:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 903334 ']' 00:05:24.908 02:04:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.908 02:04:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.908 02:04:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.908 02:04:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.908 02:04:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.908 [2024-07-27 02:04:53.021732] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:24.908 [2024-07-27 02:04:53.021826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903334 ] 00:05:24.908 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.908 [2024-07-27 02:04:53.053205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:25.168 [2024-07-27 02:04:53.080790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.168 [2024-07-27 02:04:53.165227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.426 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.426 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:25.426 02:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.426 02:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.426 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.426 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.426 { 00:05:25.426 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.426 } 00:05:25.426 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.426 02:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:25.426 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:25.426 1 heaps totaling size 814.000000 MiB 00:05:25.426 size: 814.000000 MiB heap id: 0 00:05:25.426 end heaps---------- 00:05:25.426 8 mempools totaling size 598.116089 MiB 00:05:25.426 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.426 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.426 size: 84.521057 MiB name: bdev_io_903334 00:05:25.426 size: 51.011292 MiB name: evtpool_903334 00:05:25.426 size: 50.003479 MiB name: msgpool_903334 00:05:25.426 size: 21.763794 MiB name: PDU_Pool 00:05:25.426 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.426 size: 0.026123 MiB name: Session_Pool 00:05:25.426 end mempools------- 00:05:25.426 6 memzones totaling size 4.142822 MiB 00:05:25.426 size: 1.000366 MiB name: RG_ring_0_903334 00:05:25.426 size: 1.000366 MiB name: RG_ring_1_903334 00:05:25.426 size: 1.000366 MiB name: RG_ring_4_903334 00:05:25.426 size: 1.000366 MiB name: RG_ring_5_903334 00:05:25.426 size: 0.125366 MiB name: RG_ring_2_903334 00:05:25.426 size: 0.015991 MiB name: RG_ring_3_903334 00:05:25.426 end memzones------- 00:05:25.427 02:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.427 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:25.427 list of free elements. 
size: 12.519348 MiB 00:05:25.427 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:25.427 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:25.427 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:25.427 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:25.427 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:25.427 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:25.427 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:25.427 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:25.427 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:25.427 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:25.427 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:25.427 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:25.427 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:25.427 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:25.427 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:25.427 list of standard malloc elements. size: 199.218079 MiB 00:05:25.427 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:25.427 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:25.427 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:25.427 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:25.427 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:25.427 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.427 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:25.427 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.427 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:25.427 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.427 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:25.427 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:25.427 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:25.427 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:25.427 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:25.427 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:25.427 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:25.427 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:25.427 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:25.427 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:25.427 list of memzone associated elements. size: 602.262573 MiB 00:05:25.427 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:25.427 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.427 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:25.427 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.427 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:25.427 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_903334_0 00:05:25.427 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:25.427 associated memzone info: size: 48.002930 MiB name: MP_evtpool_903334_0 00:05:25.427 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:25.427 associated memzone info: size: 48.002930 MiB name: MP_msgpool_903334_0 00:05:25.427 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:25.427 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.427 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:25.427 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.427 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:25.427 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_903334 00:05:25.427 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:25.427 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_903334 00:05:25.427 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.427 associated memzone info: size: 1.007996 MiB name: MP_evtpool_903334 00:05:25.427 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:25.427 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.427 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:25.427 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.427 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:25.427 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.427 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:25.427 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.427 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:25.427 associated memzone info: size: 1.000366 MiB name: RG_ring_0_903334 00:05:25.427 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:25.427 associated memzone info: size: 1.000366 MiB name: RG_ring_1_903334 00:05:25.427 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:25.427 associated memzone info: size: 1.000366 MiB name: RG_ring_4_903334 00:05:25.427 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:25.427 associated memzone info: size: 1.000366 MiB name: RG_ring_5_903334 00:05:25.427 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:25.427 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_903334 00:05:25.427 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:25.427 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.427 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:25.427 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.427 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:25.427 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.427 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:25.427 associated memzone info: size: 0.125366 MiB name: RG_ring_2_903334 00:05:25.427 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:25.427 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.427 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:25.427 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.427 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:25.427 associated memzone info: size: 0.015991 MiB name: RG_ring_3_903334 00:05:25.427 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:25.427 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.427 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:25.427 associated memzone info: size: 0.000183 MiB name: MP_msgpool_903334 00:05:25.427 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:25.427 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_903334 00:05:25.427 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:25.427 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.427 02:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.427 02:04:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 903334 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 903334 ']' 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 903334 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 903334 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 903334' 00:05:25.427 killing process with pid 903334 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 903334 00:05:25.427 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 903334 00:05:25.995 00:05:25.995 real 0m1.040s 00:05:25.995 user 0m0.993s 00:05:25.995 sys 0m0.417s 00:05:25.995 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.995 02:04:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.995 ************************************ 00:05:25.995 END TEST dpdk_mem_utility 00:05:25.995 ************************************ 00:05:25.995 02:04:53 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 
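Before the event suite spins up, the memory-utility flow above is worth distilling: env_dpdk_get_mem_stats makes the target dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then parses that dump offline. A minimal sketch of replaying it by hand, assuming a running target on the default /var/tmp/spdk.sock and paths relative to an SPDK checkout:

  # Ask the target to write its DPDK memory state; the reply names the dump file.
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt.
  ./scripts/dpdk_mem_info.py
  # Show per-element detail for heap 0, as in the listing above.
  ./scripts/dpdk_mem_info.py -m 0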
00:05:25.995 02:04:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.995 02:04:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.995 02:04:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.995 ************************************ 00:05:25.995 START TEST event 00:05:25.995 ************************************ 00:05:25.995 02:04:53 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:25.995 * Looking for test storage... 00:05:25.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:25.995 02:04:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:25.995 02:04:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.995 02:04:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.995 02:04:54 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:25.995 02:04:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.995 02:04:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.995 ************************************ 00:05:25.995 START TEST event_perf 00:05:25.995 ************************************ 00:05:25.995 02:04:54 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.995 Running I/O for 1 seconds...[2024-07-27 02:04:54.085235] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:25.995 [2024-07-27 02:04:54.085299] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903524 ] 00:05:25.995 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.995 [2024-07-27 02:04:54.120674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:25.995 [2024-07-27 02:04:54.150771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.254 [2024-07-27 02:04:54.243969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.254 [2024-07-27 02:04:54.244021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.254 [2024-07-27 02:04:54.244136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.254 [2024-07-27 02:04:54.244140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.190 Running I/O for 1 seconds... 00:05:27.190 lcore 0: 232085 00:05:27.190 lcore 1: 232085 00:05:27.190 lcore 2: 232085 00:05:27.190 lcore 3: 232085 00:05:27.190 done. 
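The lcore lines above are the benchmark's result: with -m 0xF, four reactors each ran the one-second event loop and reported their per-core event counts (about 232k events apiece here). A minimal sketch of rerunning it by hand, assuming an SPDK build tree (the absolute harness paths are shortened):

  # Drive the event framework on cores 0-3 for one second and print per-lcore counts.
  ./test/event/event_perf/event_perf -m 0xF -t 1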
00:05:27.190 00:05:27.190 real 0m1.255s 00:05:27.190 user 0m4.164s 00:05:27.190 sys 0m0.087s 00:05:27.190 02:04:55 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.190 02:04:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.190 ************************************ 00:05:27.190 END TEST event_perf 00:05:27.190 ************************************ 00:05:27.190 02:04:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.190 02:04:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:27.190 02:04:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.190 02:04:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.450 ************************************ 00:05:27.450 START TEST event_reactor 00:05:27.450 ************************************ 00:05:27.450 02:04:55 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.450 [2024-07-27 02:04:55.389614] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:27.450 [2024-07-27 02:04:55.389682] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903680 ] 00:05:27.450 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.450 [2024-07-27 02:04:55.423460] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:27.450 [2024-07-27 02:04:55.454348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.450 [2024-07-27 02:04:55.547590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.829 test_start 00:05:28.829 oneshot 00:05:28.829 tick 100 00:05:28.829 tick 100 00:05:28.829 tick 250 00:05:28.829 tick 100 00:05:28.829 tick 100 00:05:28.829 tick 100 00:05:28.829 tick 250 00:05:28.829 tick 500 00:05:28.829 tick 100 00:05:28.829 tick 100 00:05:28.829 tick 250 00:05:28.829 tick 100 00:05:28.829 tick 100 00:05:28.829 test_end 00:05:28.829 00:05:28.829 real 0m1.253s 00:05:28.829 user 0m1.159s 00:05:28.829 sys 0m0.089s 00:05:28.829 02:04:56 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.829 02:04:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.829 ************************************ 00:05:28.829 END TEST event_reactor 00:05:28.829 ************************************ 00:05:28.829 02:04:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.829 02:04:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:28.829 02:04:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.829 02:04:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.829 ************************************ 00:05:28.829 START TEST event_reactor_perf 00:05:28.829 ************************************ 00:05:28.829 02:04:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.829 [2024-07-27 02:04:56.685372] Starting SPDK v24.09-pre git sha1 
cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:28.829 [2024-07-27 02:04:56.685453] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903839 ] 00:05:28.829 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.829 [2024-07-27 02:04:56.718086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.829 [2024-07-27 02:04:56.748125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.829 [2024-07-27 02:04:56.840593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.767 test_start 00:05:29.767 test_end 00:05:29.767 Performance: 356071 events per second 00:05:29.767 00:05:29.767 real 0m1.250s 00:05:29.767 user 0m1.163s 00:05:29.767 sys 0m0.083s 00:05:29.767 02:04:57 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.767 02:04:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.767 ************************************ 00:05:29.767 END TEST event_reactor_perf 00:05:29.767 ************************************ 00:05:30.026 02:04:57 event -- event/event.sh@49 -- # uname -s 00:05:30.026 02:04:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.026 02:04:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.026 02:04:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.026 02:04:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.026 02:04:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.026 ************************************ 00:05:30.026 START TEST event_scheduler 00:05:30.026 ************************************ 00:05:30.026 02:04:57 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.026 * Looking for test storage... 00:05:30.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:30.026 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.026 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=904021 00:05:30.026 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.026 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.026 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 904021 00:05:30.026 02:04:58 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 904021 ']' 00:05:30.026 02:04:58 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.026 02:04:58 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.026 02:04:58 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:30.026 02:04:58 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.026 02:04:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.026 [2024-07-27 02:04:58.064845] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:30.026 [2024-07-27 02:04:58.064922] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904021 ] 00:05:30.026 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.026 [2024-07-27 02:04:58.096129] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.026 [2024-07-27 02:04:58.122296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.286 [2024-07-27 02:04:58.209959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.286 [2024-07-27 02:04:58.210024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.286 [2024-07-27 02:04:58.210090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.286 [2024-07-27 02:04:58.210094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:30.286 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 [2024-07-27 02:04:58.266871] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:30.286 [2024-07-27 02:04:58.266897] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.286 [2024-07-27 02:04:58.266913] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.286 [2024-07-27 02:04:58.266924] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.286 [2024-07-27 02:04:58.266934] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 [2024-07-27 02:04:58.357765] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
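The dpdk_governor error above is expected on this host (the 0xF mask covers only part of a set of SMT siblings), so the dynamic scheduler comes up without a governor and logs its operating parameters: load limit 20, core limit 80, core busy 95. The bring-up itself is the standard --wait-for-rpc pattern, sketched minimally below against a freshly started target, assuming the usual rpc.py location; the create-thread pass that follows builds on this:

  # The app was launched with --wait-for-rpc, so the scheduler can be swapped
  # before any subsystem initializes.
  ./scripts/rpc.py framework_set_scheduler dynamic
  # Kick off subsystem initialization once configuration is done.
  ./scripts/rpc.py framework_start_init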
00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 ************************************ 00:05:30.286 START TEST scheduler_create_thread 00:05:30.286 ************************************ 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 2 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 3 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 4 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 5 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 6 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 7 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.286 8 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.286 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.546 9 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.546 10 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.546 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.116 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.116 00:05:31.116 real 0m0.588s 00:05:31.116 user 0m0.006s 00:05:31.116 sys 0m0.007s 00:05:31.116 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.116 02:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.117 ************************************ 00:05:31.117 END TEST scheduler_create_thread 00:05:31.117 ************************************ 00:05:31.117 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.117 02:04:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 904021 00:05:31.117 02:04:58 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 904021 ']' 00:05:31.117 02:04:58 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 904021 00:05:31.117 02:04:58 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:31.117 02:04:58 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.117 02:04:58 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 904021 00:05:31.117 02:04:59 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:31.117 02:04:59 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:31.117 02:04:59 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 904021' 00:05:31.117 killing process with pid 904021 00:05:31.117 02:04:59 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 904021 00:05:31.117 02:04:59 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 904021 00:05:31.374 [2024-07-27 02:04:59.449848] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
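The lifecycle exercised above runs entirely through the test-only scheduler_plugin RPCs: pinned threads are created at various activity levels (active at 100, idle at 0, plus one_third_active and half_active), thread 11 is re-weighted to 50, and thread 12 is created and then deleted. A minimal sketch of the same calls, assuming the scheduler_plugin module used by these tests is importable by rpc.py:

  # Create a thread pinned to core 0 that reports 100% activity.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Re-weight thread 11 to 50% reported activity.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  # Delete thread 12.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12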
00:05:31.632 00:05:31.632 real 0m1.685s 00:05:31.632 user 0m2.146s 00:05:31.632 sys 0m0.311s 00:05:31.632 02:04:59 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.632 02:04:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.632 ************************************ 00:05:31.632 END TEST event_scheduler 00:05:31.632 ************************************ 00:05:31.632 02:04:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:31.632 02:04:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:31.632 02:04:59 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.632 02:04:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.632 02:04:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.632 ************************************ 00:05:31.632 START TEST app_repeat 00:05:31.632 ************************************ 00:05:31.632 02:04:59 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=904324 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 904324' 00:05:31.632 Process app_repeat pid: 904324 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:31.632 spdk_app_start Round 0 00:05:31.632 02:04:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 904324 /var/tmp/spdk-nbd.sock 00:05:31.632 02:04:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 904324 ']' 00:05:31.632 02:04:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.632 02:04:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.632 02:04:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.632 02:04:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.632 02:04:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.632 [2024-07-27 02:04:59.733297] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:05:31.632 [2024-07-27 02:04:59.733362] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904324 ] 00:05:31.632 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.632 [2024-07-27 02:04:59.765810] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:31.890 [2024-07-27 02:04:59.797045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.890 [2024-07-27 02:04:59.887278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.890 [2024-07-27 02:04:59.887284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.890 02:04:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.890 02:04:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:31.890 02:04:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.146 Malloc0 00:05:32.146 02:05:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.403 Malloc1 00:05:32.403 02:05:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.403 02:05:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.403 02:05:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.403 02:05:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.403 02:05:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.403 02:05:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.403 02:05:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.403 02:05:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.404 02:05:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.661 /dev/nbd0 00:05:32.661 02:05:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.661 02:05:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.661 02:05:00 
event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.661 1+0 records in 00:05:32.661 1+0 records out 00:05:32.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155283 s, 26.4 MB/s 00:05:32.661 02:05:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.919 02:05:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.919 02:05:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.919 02:05:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.919 02:05:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.919 02:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.919 02:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.919 02:05:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.919 /dev/nbd1 00:05:32.919 02:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.919 02:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.919 02:05:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:32.919 02:05:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.919 02:05:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.919 02:05:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.919 02:05:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.179 1+0 records in 00:05:33.179 1+0 records out 00:05:33.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200041 s, 20.5 MB/s 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.179 02:05:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.179 
02:05:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.179 02:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.179 02:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.179 02:05:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.179 02:05:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.179 02:05:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.435 { 00:05:33.435 "nbd_device": "/dev/nbd0", 00:05:33.435 "bdev_name": "Malloc0" 00:05:33.435 }, 00:05:33.435 { 00:05:33.435 "nbd_device": "/dev/nbd1", 00:05:33.435 "bdev_name": "Malloc1" 00:05:33.435 } 00:05:33.435 ]' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.435 { 00:05:33.435 "nbd_device": "/dev/nbd0", 00:05:33.435 "bdev_name": "Malloc0" 00:05:33.435 }, 00:05:33.435 { 00:05:33.435 "nbd_device": "/dev/nbd1", 00:05:33.435 "bdev_name": "Malloc1" 00:05:33.435 } 00:05:33.435 ]' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.435 /dev/nbd1' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.435 /dev/nbd1' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.435 256+0 records in 00:05:33.435 256+0 records out 00:05:33.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376608 s, 278 MB/s 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.435 256+0 records in 00:05:33.435 256+0 records out 00:05:33.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273666 s, 38.3 MB/s 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.435 256+0 records in 00:05:33.435 256+0 records out 00:05:33.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270427 s, 38.8 MB/s 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.435 02:05:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.692 02:05:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.948 02:05:02 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.948 02:05:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.205 02:05:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.205 02:05:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.464 02:05:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.724 [2024-07-27 02:05:02.810829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.984 [2024-07-27 02:05:02.906341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.984 [2024-07-27 02:05:02.906345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.984 [2024-07-27 02:05:02.966952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.984 [2024-07-27 02:05:02.967015] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
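Reconstructed from the event.sh xtrace above, each app_repeat pass has the same shape: announce the round, wait for the app's RPC socket, create two 64 MB malloc bdevs (4 KiB blocks), run the nbd write/verify pass, then ask the app to kill and restart itself. A condensed sketch under that reading — the rpc.py path, socket, and the 904324 pid are taken from the trace; the round numbering is inferred, and xtrace plumbing and retries are elided:

# Sketch of the app_repeat round loop as traced (event/event.sh).
# The initial app start is Round 0; each loop pass drives Rounds 1-3.
rpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock "$@"
}

for i in {0..2}; do
    echo "spdk_app_start Round $((i + 1))"        # numbering inferred from the log
    waitforlisten 904324 /var/tmp/spdk-nbd.sock   # block until RPCs are accepted
    rpc bdev_malloc_create 64 4096                # Malloc0: 64 MB, 4 KiB blocks
    rpc bdev_malloc_create 64 4096                # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc spdk_kill_instance SIGTERM                # app_repeat restarts for the next round
    sleep 3
done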
00:05:37.524 02:05:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.524 02:05:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.524 spdk_app_start Round 1 00:05:37.524 02:05:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 904324 /var/tmp/spdk-nbd.sock 00:05:37.524 02:05:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 904324 ']' 00:05:37.524 02:05:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.524 02:05:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.524 02:05:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.524 02:05:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.524 02:05:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.835 02:05:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.835 02:05:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.835 02:05:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.092 Malloc0 00:05:38.092 02:05:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.351 Malloc1 00:05:38.351 02:05:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.351 02:05:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.609 /dev/nbd0 00:05:38.609 02:05:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.609 02:05:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.609 1+0 records in 00:05:38.609 1+0 records out 00:05:38.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000137894 s, 29.7 MB/s 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.609 02:05:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.609 02:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.609 02:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.609 02:05:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.867 /dev/nbd1 00:05:38.867 02:05:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.867 02:05:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.867 1+0 records in 00:05:38.867 1+0 records out 00:05:38.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002034 s, 20.1 MB/s 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.867 02:05:06 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.867 02:05:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.867 02:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.867 02:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.867 02:05:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.867 02:05:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.867 02:05:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.126 { 00:05:39.126 "nbd_device": "/dev/nbd0", 00:05:39.126 "bdev_name": "Malloc0" 00:05:39.126 }, 00:05:39.126 { 00:05:39.126 "nbd_device": "/dev/nbd1", 00:05:39.126 "bdev_name": "Malloc1" 00:05:39.126 } 00:05:39.126 ]' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.126 { 00:05:39.126 "nbd_device": "/dev/nbd0", 00:05:39.126 "bdev_name": "Malloc0" 00:05:39.126 }, 00:05:39.126 { 00:05:39.126 "nbd_device": "/dev/nbd1", 00:05:39.126 "bdev_name": "Malloc1" 00:05:39.126 } 00:05:39.126 ]' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.126 /dev/nbd1' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.126 /dev/nbd1' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.126 256+0 records in 00:05:39.126 256+0 records out 00:05:39.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516141 s, 203 MB/s 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.126 256+0 records in 00:05:39.126 256+0 records out 00:05:39.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0239865 s, 43.7 MB/s 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.126 256+0 records in 00:05:39.126 256+0 records out 00:05:39.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262503 s, 39.9 MB/s 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.126 02:05:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.384 02:05:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.643 02:05:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.643 02:05:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.643 02:05:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.643 02:05:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.643 02:05:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.901 02:05:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.901 02:05:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.901 02:05:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.901 02:05:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.901 02:05:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.901 02:05:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.901 02:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.901 02:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.901 02:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.159 02:05:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.159 02:05:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.418 02:05:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.678 [2024-07-27 02:05:08.585901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.678 [2024-07-27 02:05:08.676465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.678 [2024-07-27 02:05:08.676470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.678 [2024-07-27 02:05:08.738209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.678 [2024-07-27 02:05:08.738274] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
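The data-verify pass repeated in every round reduces to three dd/cmp steps, all visible in the nbd_common.sh trace: fill a scratch file with 1 MiB of random data, write it to each nbd device with O_DIRECT, then compare the first 1 MiB of each device back against the file. A sketch using the trace's paths:

# nbd_dd_data_verify as traced: "write" populates the devices,
# "verify" reads them back byte-for-byte.
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                          # -b: report differing bytes
done
rm "$tmp_file"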
00:05:43.216 02:05:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.216 02:05:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.216 spdk_app_start Round 2 00:05:43.216 02:05:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 904324 /var/tmp/spdk-nbd.sock 00:05:43.216 02:05:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 904324 ']' 00:05:43.216 02:05:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.216 02:05:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.216 02:05:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.216 02:05:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.216 02:05:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.474 02:05:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.474 02:05:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:43.474 02:05:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.731 Malloc0 00:05:43.731 02:05:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.990 Malloc1 00:05:43.990 02:05:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.990 02:05:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.249 /dev/nbd0 00:05:44.249 02:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.249 02:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.249 02:05:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.508 1+0 records in 00:05:44.508 1+0 records out 00:05:44.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194332 s, 21.1 MB/s 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.508 02:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.508 02:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.508 02:05:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.508 /dev/nbd1 00:05:44.508 02:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.508 02:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.508 02:05:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.768 1+0 records in 00:05:44.768 1+0 records out 00:05:44.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219621 s, 18.7 MB/s 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.768 02:05:12 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.768 02:05:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.768 02:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.768 02:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.768 02:05:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.768 02:05:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.768 02:05:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.768 02:05:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.768 { 00:05:44.768 "nbd_device": "/dev/nbd0", 00:05:44.768 "bdev_name": "Malloc0" 00:05:44.768 }, 00:05:44.768 { 00:05:44.768 "nbd_device": "/dev/nbd1", 00:05:44.768 "bdev_name": "Malloc1" 00:05:44.768 } 00:05:44.768 ]' 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.027 { 00:05:45.027 "nbd_device": "/dev/nbd0", 00:05:45.027 "bdev_name": "Malloc0" 00:05:45.027 }, 00:05:45.027 { 00:05:45.027 "nbd_device": "/dev/nbd1", 00:05:45.027 "bdev_name": "Malloc1" 00:05:45.027 } 00:05:45.027 ]' 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.027 /dev/nbd1' 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.027 /dev/nbd1' 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.027 256+0 records in 00:05:45.027 256+0 records out 00:05:45.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398574 s, 263 MB/s 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.027 02:05:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.027 256+0 records in 00:05:45.027 256+0 records out 00:05:45.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0239153 s, 43.8 MB/s 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.027 256+0 records in 00:05:45.027 256+0 records out 00:05:45.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258384 s, 40.6 MB/s 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.027 02:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.284 02:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.542 02:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.800 02:05:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.800 02:05:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.058 02:05:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.316 [2024-07-27 02:05:14.378439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.316 [2024-07-27 02:05:14.466731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.316 [2024-07-27 02:05:14.466735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.575 [2024-07-27 02:05:14.524345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.575 [2024-07-27 02:05:14.524450] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
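After both nbd_stop_disk calls, the teardown check above counts whatever nbd_get_disks still reports; an empty JSON list must yield a count of 0. The stray "true" in the xtrace is grep -c exiting non-zero on zero matches. A sketch of the counting helper as traced, with the rpc.py path shortened:

# nbd_get_count as traced (bdev/nbd_common.sh): count exported
# /dev/nbd* entries in the nbd_get_disks RPC output.
nbd_get_count() {
    local rpc_server=$1
    local disks_json disks_name
    disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    echo "$disks_name" | grep -c /dev/nbd || true   # grep -c fails on 0 matches
}

# In the trace this evaluates as '[' 0 -ne 0 ']', so the check passes:
[ "$(nbd_get_count /var/tmp/spdk-nbd.sock)" -eq 0 ]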
00:05:49.113 02:05:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 904324 /var/tmp/spdk-nbd.sock 00:05:49.113 02:05:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 904324 ']' 00:05:49.113 02:05:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.113 02:05:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.113 02:05:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.113 02:05:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.113 02:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.373 02:05:17 event.app_repeat -- event/event.sh@39 -- # killprocess 904324 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 904324 ']' 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 904324 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 904324 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 904324' 00:05:49.373 killing process with pid 904324 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@969 -- # kill 904324 00:05:49.373 02:05:17 event.app_repeat -- common/autotest_common.sh@974 -- # wait 904324 00:05:49.632 spdk_app_start is called in Round 0. 00:05:49.632 Shutdown signal received, stop current app iteration 00:05:49.632 Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 reinitialization... 00:05:49.632 spdk_app_start is called in Round 1. 00:05:49.632 Shutdown signal received, stop current app iteration 00:05:49.632 Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 reinitialization... 00:05:49.632 spdk_app_start is called in Round 2. 00:05:49.632 Shutdown signal received, stop current app iteration 00:05:49.632 Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 reinitialization... 00:05:49.632 spdk_app_start is called in Round 3. 
00:05:49.632 Shutdown signal received, stop current app iteration 00:05:49.632 02:05:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:49.632 02:05:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:49.632 00:05:49.632 real 0m17.934s 00:05:49.632 user 0m39.080s 00:05:49.632 sys 0m3.198s 00:05:49.632 02:05:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.632 02:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.632 ************************************ 00:05:49.632 END TEST app_repeat 00:05:49.632 ************************************ 00:05:49.632 02:05:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:49.632 02:05:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.632 02:05:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.632 02:05:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.632 02:05:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.632 ************************************ 00:05:49.632 START TEST cpu_locks 00:05:49.632 ************************************ 00:05:49.632 02:05:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.632 * Looking for test storage... 00:05:49.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.632 02:05:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:49.632 02:05:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:49.632 02:05:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:49.632 02:05:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:49.632 02:05:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.632 02:05:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.632 02:05:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.632 ************************************ 00:05:49.632 START TEST default_locks 00:05:49.632 ************************************ 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=906680 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 906680 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 906680 ']' 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
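From here the log moves into the cpu_locks suite. default_locks starts a one-core spdk_tgt (-m 0x1), asserts via lslocks that the live target holds a file lock whose path contains spdk_cpu_lock, kills it, then confirms that waiting on the dead pid fails with "No such process". The "lslocks: write error" a few lines below is harmless: grep -q closes the pipe after the first match and lslocks complains about the failed write. The core assertion, reconstructed from the trace that follows:

# locks_exist as traced (event/cpu_locks.sh): a running spdk_tgt
# with core mask 0x1 must hold an spdk_cpu_lock file lock.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"     # default RPC socket /var/tmp/spdk.sock
locks_exist "$spdk_tgt_pid"       # succeeds only while the target is alive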
00:05:49.632 02:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.632 02:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.892 [2024-07-27 02:05:17.822270] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:49.893 [2024-07-27 02:05:17.822347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906680 ] 00:05:49.893 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.893 [2024-07-27 02:05:17.853683] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:49.893 [2024-07-27 02:05:17.879833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.893 [2024-07-27 02:05:17.965133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.153 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.153 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:50.153 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 906680 00:05:50.153 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 906680 00:05:50.153 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.413 lslocks: write error 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 906680 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 906680 ']' 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 906680 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 906680 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 906680' 00:05:50.413 killing process with pid 906680 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 906680 00:05:50.413 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 906680 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 906680 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 906680 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 
00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 906680 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 906680 ']' 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (906680) - No such process 00:05:50.981 ERROR: process (pid: 906680) is no longer running 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.981 00:05:50.981 real 0m1.181s 00:05:50.981 user 0m1.128s 00:05:50.981 sys 0m0.489s 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.981 02:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.982 ************************************ 00:05:50.982 END TEST default_locks 00:05:50.982 ************************************ 00:05:50.982 02:05:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.982 02:05:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.982 02:05:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.982 02:05:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.982 ************************************ 00:05:50.982 START TEST default_locks_via_rpc 00:05:50.982 ************************************ 00:05:50.982 02:05:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=906844 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 906844 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 906844 ']' 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.982 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.982 [2024-07-27 02:05:19.054717] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:50.982 [2024-07-27 02:05:19.054804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906844 ] 00:05:50.982 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.982 [2024-07-27 02:05:19.086490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.982 [2024-07-27 02:05:19.112604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.240 [2024-07-27 02:05:19.201987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@71 -- # locks_exist 906844 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 906844 00:05:51.498 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 906844 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 906844 ']' 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 906844 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 906844 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 906844' 00:05:51.756 killing process with pid 906844 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 906844 00:05:51.756 02:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 906844 00:05:52.013 00:05:52.013 real 0m1.128s 00:05:52.013 user 0m1.078s 00:05:52.013 sys 0m0.510s 00:05:52.013 02:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.013 02:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.013 ************************************ 00:05:52.013 END TEST default_locks_via_rpc 00:05:52.013 ************************************ 00:05:52.013 02:05:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:52.013 02:05:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.013 02:05:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.013 02:05:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.271 ************************************ 00:05:52.271 START TEST non_locking_app_on_locked_coremask 00:05:52.271 ************************************ 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=907006 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 907006 /var/tmp/spdk.sock 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 907006 ']' 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.271 02:05:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.271 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.271 [2024-07-27 02:05:20.230861] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:52.271 [2024-07-27 02:05:20.230935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907006 ] 00:05:52.271 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.271 [2024-07-27 02:05:20.263342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.271 [2024-07-27 02:05:20.289613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.271 [2024-07-27 02:05:20.378295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=907017 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 907017 /var/tmp/spdk2.sock 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 907017 ']' 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.529 02:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.529 [2024-07-27 02:05:20.668469] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:05:52.529 [2024-07-27 02:05:20.668544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907017 ] 00:05:52.788 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.788 [2024-07-27 02:05:20.703774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.788 [2024-07-27 02:05:20.761623] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:52.788 [2024-07-27 02:05:20.761653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.788 [2024-07-27 02:05:20.945377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.758 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.758 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.758 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 907006 00:05:53.758 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 907006 00:05:53.758 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.017 lslocks: write error 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 907006 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 907006 ']' 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 907006 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 907006 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 907006' 00:05:54.017 killing process with pid 907006 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 907006 00:05:54.017 02:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 907006 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 907017 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 907017 ']' 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 907017 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.952 02:05:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 907017 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 907017' 00:05:54.952 killing process with pid 907017 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 907017 00:05:54.952 02:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 907017 00:05:55.210 00:05:55.210 real 0m3.047s 00:05:55.210 user 0m3.189s 00:05:55.210 sys 0m1.030s 00:05:55.210 02:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.211 02:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.211 ************************************ 00:05:55.211 END TEST non_locking_app_on_locked_coremask 00:05:55.211 ************************************ 00:05:55.211 02:05:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.211 02:05:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.211 02:05:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.211 02:05:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.211 ************************************ 00:05:55.211 START TEST locking_app_on_unlocked_coremask 00:05:55.211 ************************************ 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=907325 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 907325 /var/tmp/spdk.sock 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 907325 ']' 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
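The non_locking_app_on_locked_coremask case that just finished reduces to two launches on the same core mask, both taken from the trace above (pids 907006 and 907017); the second instance opts out of lock files, so the two coexist on core 0:

    # Commands as traced; the harness backgrounds each and records its pid.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # no lock files, second RPC socket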
00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.211 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.211 [2024-07-27 02:05:23.332127] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:55.211 [2024-07-27 02:05:23.332223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907325 ] 00:05:55.211 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.211 [2024-07-27 02:05:23.364848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:55.469 [2024-07-27 02:05:23.391802] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.469 [2024-07-27 02:05:23.391827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.469 [2024-07-27 02:05:23.480104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=907453 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 907453 /var/tmp/spdk2.sock 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 907453 ']' 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.726 02:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.726 [2024-07-27 02:05:23.778483] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:55.726 [2024-07-27 02:05:23.778578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907453 ] 00:05:55.726 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.726 [2024-07-27 02:05:23.814743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
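locking_app_on_unlocked_coremask, running here, inverts that scenario: the first target (pid 907325) printed "CPU core locks deactivated." above and therefore created no lock files, leaving core 0 free for the second, locking target to claim, which the locks_exist 907453 call traced next confirms. A system-wide variant of that spot-check, my phrasing rather than the test's:

    # With the first instance unlocked, the only spdk_cpu_lock holder on the
    # box should be the second instance (pid 907453 in this run).
    lslocks | grep spdk_cpu_lock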
00:05:55.726 [2024-07-27 02:05:23.864451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.984 [2024-07-27 02:05:24.046927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.919 02:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.919 02:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.919 02:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 907453 00:05:56.919 02:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 907453 00:05:56.919 02:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.178 lslocks: write error 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 907325 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 907325 ']' 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 907325 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 907325 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.178 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 907325' 00:05:57.178 killing process with pid 907325 00:05:57.179 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 907325 00:05:57.179 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 907325 00:05:58.113 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 907453 00:05:58.113 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 907453 ']' 00:05:58.113 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 907453 00:05:58.113 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:58.113 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.113 02:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 907453 00:05:58.113 02:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.113 02:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.113 02:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 907453' 00:05:58.113 killing process with pid 907453 00:05:58.113 02:05:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 907453 00:05:58.113 02:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 907453 00:05:58.371 00:05:58.371 real 0m3.130s 00:05:58.371 user 0m3.296s 00:05:58.371 sys 0m1.014s 00:05:58.371 02:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.371 02:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.371 ************************************ 00:05:58.371 END TEST locking_app_on_unlocked_coremask 00:05:58.371 ************************************ 00:05:58.371 02:05:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.372 02:05:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.372 02:05:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.372 02:05:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.372 ************************************ 00:05:58.372 START TEST locking_app_on_locked_coremask 00:05:58.372 ************************************ 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=907760 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 907760 /var/tmp/spdk.sock 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 907760 ']' 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.372 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.372 [2024-07-27 02:05:26.509897] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:58.372 [2024-07-27 02:05:26.509986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907760 ] 00:05:58.631 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.631 [2024-07-27 02:05:26.542005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
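The locking_app_on_locked_coremask case now starting asserts a failure rather than a success, using the NOT wrapper first traced back in default_locks. A simplified sketch of its shape, reconstructed from the xtrace; the real helper in common/autotest_common.sh additionally vets the wrapped command with valid_exec_arg and screens signal deaths via the (( es > 128 )) check visible in the trace:

    # NOT runs a command and succeeds only if the command failed.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }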
00:05:58.631 [2024-07-27 02:05:26.573632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.631 [2024-07-27 02:05:26.661424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=907884 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 907884 /var/tmp/spdk2.sock 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 907884 /var/tmp/spdk2.sock 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 907884 /var/tmp/spdk2.sock 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 907884 ']' 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.889 02:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.889 [2024-07-27 02:05:26.975917] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:05:58.889 [2024-07-27 02:05:26.976000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907884 ] 00:05:58.889 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.889 [2024-07-27 02:05:27.009846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:59.148 [2024-07-27 02:05:27.074405] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 907760 has claimed it. 00:05:59.148 [2024-07-27 02:05:27.074462] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (907884) - No such process 00:05:59.716 ERROR: process (pid: 907884) is no longer running 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 907760 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 907760 00:05:59.716 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.975 lslocks: write error 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 907760 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 907760 ']' 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 907760 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 907760 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 907760' 00:05:59.975 killing process with pid 907760 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 907760 00:05:59.975 02:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 907760 00:06:00.233 00:06:00.233 real 0m1.924s 00:06:00.233 user 0m2.067s 00:06:00.233 sys 0m0.639s 00:06:00.233 02:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.233 02:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.233 ************************************ 00:06:00.233 END TEST locking_app_on_locked_coremask 00:06:00.233 ************************************ 00:06:00.493 02:05:28 event.cpu_locks -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.493 02:05:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.493 02:05:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.493 02:05:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.493 ************************************ 00:06:00.493 START TEST locking_overlapped_coremask 00:06:00.493 ************************************ 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=908055 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 908055 /var/tmp/spdk.sock 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 908055 ']' 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.493 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.493 [2024-07-27 02:05:28.481125] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:06:00.493 [2024-07-27 02:05:28.481233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908055 ] 00:06:00.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.493 [2024-07-27 02:05:28.513895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
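The collision this test engineers is already visible in the two core masks: 0x7 pins reactors to cores 0-2 and 0x1c to cores 2-4, so the instances overlap on exactly one core, the one named in the claim error further down:

    # Core-mask arithmetic: the intersection of the two masks is bit 2.
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2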
00:06:00.493 [2024-07-27 02:05:28.540038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.493 [2024-07-27 02:05:28.630521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.493 [2024-07-27 02:05:28.630585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.493 [2024-07-27 02:05:28.630588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.751 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.751 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:00.751 02:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=908067 00:06:00.751 02:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 908067 /var/tmp/spdk2.sock 00:06:00.751 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:00.751 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 908067 /var/tmp/spdk2.sock 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 908067 /var/tmp/spdk2.sock 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 908067 ']' 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.752 02:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.011 [2024-07-27 02:05:28.930835] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:06:01.011 [2024-07-27 02:05:28.930912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908067 ] 00:06:01.011 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.011 [2024-07-27 02:05:28.967156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.011 [2024-07-27 02:05:29.022466] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 908055 has claimed it. 00:06:01.011 [2024-07-27 02:05:29.022521] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (908067) - No such process 00:06:01.579 ERROR: process (pid: 908067) is no longer running 00:06:01.579 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.579 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:01.579 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:01.579 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 908055 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 908055 ']' 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 908055 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 908055 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 908055' 00:06:01.580 killing process with pid 908055 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 908055 00:06:01.580 02:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 908055 00:06:02.150 00:06:02.150 real 0m1.646s 00:06:02.150 user 0m4.475s 00:06:02.150 sys 0m0.461s 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.150 ************************************ 00:06:02.150 END TEST locking_overlapped_coremask 00:06:02.150 ************************************ 00:06:02.150 02:05:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.150 02:05:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.150 02:05:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.150 02:05:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.150 ************************************ 00:06:02.150 START TEST locking_overlapped_coremask_via_rpc 00:06:02.150 ************************************ 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=908290 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 908290 /var/tmp/spdk.sock 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 908290 ']' 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.150 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.150 [2024-07-27 02:05:30.176532] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:06:02.150 [2024-07-27 02:05:30.176599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908290 ] 00:06:02.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.150 [2024-07-27 02:05:30.209524] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:02.150 [2024-07-27 02:05:30.237704] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.150 [2024-07-27 02:05:30.237744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.408 [2024-07-27 02:05:30.333273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.408 [2024-07-27 02:05:30.333331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.408 [2024-07-27 02:05:30.333334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.666 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.666 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.666 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=908359 00:06:02.666 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 908359 /var/tmp/spdk2.sock 00:06:02.666 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.666 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 908359 ']' 00:06:02.667 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.667 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.667 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.667 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.667 02:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.667 [2024-07-27 02:05:30.630476] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:06:02.667 [2024-07-27 02:05:30.630560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908359 ] 00:06:02.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.667 [2024-07-27 02:05:30.666129] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
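Unlike the earlier cases, both via_rpc targets come up with --disable-cpumask-locks (each prints "CPU core locks deactivated.") and the locks are taken afterwards over JSON-RPC, the same framework_enable_cpumask_locks/framework_disable_cpumask_locks pair exercised single-instance in default_locks_via_rpc earlier. The test drives this through its rpc_cmd wrapper; a hypothetical equivalent with the stock rpc.py client, socket path taken from the trace:

    # First instance claims cores 0-2 at runtime; repeating the call against
    # the second instance's socket must fail on the shared core 2.
    scripts/rpc.py framework_enable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks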
00:06:02.667 [2024-07-27 02:05:30.722140] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.667 [2024-07-27 02:05:30.722167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.927 [2024-07-27 02:05:30.898159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.927 [2024-07-27 02:05:30.902108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:02.927 [2024-07-27 02:05:30.902110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.492 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.492 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.492 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.492 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.492 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.493 [2024-07-27 02:05:31.580169] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 908290 has claimed it. 
00:06:03.493 request: 00:06:03.493 { 00:06:03.493 "method": "framework_enable_cpumask_locks", 00:06:03.493 "req_id": 1 00:06:03.493 } 00:06:03.493 Got JSON-RPC error response 00:06:03.493 response: 00:06:03.493 { 00:06:03.493 "code": -32603, 00:06:03.493 "message": "Failed to claim CPU core: 2" 00:06:03.493 } 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 908290 /var/tmp/spdk.sock 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 908290 ']' 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.493 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.750 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.750 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.750 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 908359 /var/tmp/spdk2.sock 00:06:03.751 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 908359 ']' 00:06:03.751 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.751 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.751 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
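The -32603 in the response above is the generic JSON-RPC internal-error code; SPDK surfaces the failed core-2 claim through it without taking down either target, and the waitforlisten calls on both sockets around this point assert exactly that survival. A hedged equivalent, using rpc_get_methods as a liveness probe (my choice, not the test's):

    # Both RPC sockets must still answer after the failed lock claim.
    for sock in /var/tmp/spdk.sock /var/tmp/spdk2.sock; do
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null
    done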
00:06:03.751 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.751 02:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.010 00:06:04.010 real 0m1.980s 00:06:04.010 user 0m1.042s 00:06:04.010 sys 0m0.175s 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.010 02:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.010 ************************************ 00:06:04.010 END TEST locking_overlapped_coremask_via_rpc 00:06:04.010 ************************************ 00:06:04.010 02:05:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:04.010 02:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 908290 ]] 00:06:04.010 02:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 908290 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 908290 ']' 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 908290 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 908290 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 908290' 00:06:04.010 killing process with pid 908290 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 908290 00:06:04.010 02:05:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 908290 00:06:04.578 02:05:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 908359 ]] 00:06:04.578 02:05:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 908359 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 908359 ']' 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 908359 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
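check_remaining_locks, traced just above, is the closing assertion of the via_rpc flow: after framework_enable_cpumask_locks, exactly the lock files for cores 0-2 must exist (the heavily escaped [[ ]] line is just bash's xtrace rendering of the comparison). Reassembled from the trace:

    # Glob the lock files actually present and compare against cores 000-002.
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }

The no_locks helper traced in the earlier single-instance tests is the degenerate form of the same idea: the lock_files glob must match nothing.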
00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 908359 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 908359' 00:06:04.578 killing process with pid 908359 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 908359 00:06:04.578 02:05:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 908359 00:06:04.838 02:05:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.838 02:05:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.838 02:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 908290 ]] 00:06:04.838 02:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 908290 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 908290 ']' 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 908290 00:06:04.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (908290) - No such process 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 908290 is not found' 00:06:04.838 Process with pid 908290 is not found 00:06:04.838 02:05:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 908359 ]] 00:06:04.838 02:05:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 908359 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 908359 ']' 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 908359 00:06:04.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (908359) - No such process 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 908359 is not found' 00:06:04.838 Process with pid 908359 is not found 00:06:04.838 02:05:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.838 00:06:04.838 real 0m15.298s 00:06:04.838 user 0m27.081s 00:06:04.838 sys 0m5.229s 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.838 02:05:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.838 ************************************ 00:06:04.838 END TEST cpu_locks 00:06:04.838 ************************************ 00:06:05.098 00:06:05.098 real 0m39.014s 00:06:05.098 user 1m14.941s 00:06:05.098 sys 0m9.206s 00:06:05.098 02:05:33 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.098 02:05:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 ************************************ 00:06:05.098 END TEST event 00:06:05.098 ************************************ 00:06:05.098 02:05:33 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:05.098 02:05:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.098 02:05:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.098 02:05:33 -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 ************************************ 00:06:05.098 START TEST thread 00:06:05.098 ************************************ 00:06:05.098 02:05:33 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:05.098 * Looking for test storage... 00:06:05.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:05.098 02:05:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.098 02:05:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:05.098 02:05:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.098 02:05:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.098 ************************************ 00:06:05.098 START TEST thread_poller_perf 00:06:05.098 ************************************ 00:06:05.098 02:05:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.098 [2024-07-27 02:05:33.149804] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:06:05.098 [2024-07-27 02:05:33.149872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908730 ] 00:06:05.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.098 [2024-07-27 02:05:33.182674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.098 [2024-07-27 02:05:33.209754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.357 [2024-07-27 02:05:33.298474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.357 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:06.295 ====================================== 00:06:06.295 busy:2711786454 (cyc) 00:06:06.295 total_run_count: 298000 00:06:06.295 tsc_hz: 2700000000 (cyc) 00:06:06.295 ====================================== 00:06:06.295 poller_cost: 9099 (cyc), 3370 (nsec) 00:06:06.295 00:06:06.295 real 0m1.254s 00:06:06.295 user 0m1.169s 00:06:06.295 sys 0m0.079s 00:06:06.295 02:05:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.295 02:05:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.295 ************************************ 00:06:06.295 END TEST thread_poller_perf 00:06:06.295 ************************************ 00:06:06.295 02:05:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.295 02:05:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:06.295 02:05:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.295 02:05:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.295 ************************************ 00:06:06.295 START TEST thread_poller_perf 00:06:06.295 ************************************ 00:06:06.295 02:05:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.295 [2024-07-27 02:05:34.452857] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:06:06.295 [2024-07-27 02:05:34.452927] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908882 ] 00:06:06.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.553 [2024-07-27 02:05:34.483994] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.553 [2024-07-27 02:05:34.515658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.553 [2024-07-27 02:05:34.605222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.553 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:07.935 ====================================== 00:06:07.935 busy:2703021183 (cyc) 00:06:07.935 total_run_count: 3860000 00:06:07.935 tsc_hz: 2700000000 (cyc) 00:06:07.935 ====================================== 00:06:07.935 poller_cost: 700 (cyc), 259 (nsec) 00:06:07.935 00:06:07.935 real 0m1.251s 00:06:07.935 user 0m1.166s 00:06:07.935 sys 0m0.079s 00:06:07.935 02:05:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.935 02:05:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.936 ************************************ 00:06:07.936 END TEST thread_poller_perf 00:06:07.936 ************************************ 00:06:07.936 02:05:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.936 00:06:07.936 real 0m2.655s 00:06:07.936 user 0m2.394s 00:06:07.936 sys 0m0.260s 00:06:07.936 02:05:35 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.936 02:05:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.936 ************************************ 00:06:07.936 END TEST thread 00:06:07.936 ************************************ 00:06:07.936 02:05:35 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:07.936 02:05:35 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:07.936 02:05:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.936 02:05:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.936 02:05:35 -- common/autotest_common.sh@10 -- # set +x 00:06:07.936 ************************************ 00:06:07.936 START TEST app_cmdline 00:06:07.936 ************************************ 00:06:07.936 02:05:35 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:07.936 * Looking for test storage... 
00:06:07.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:07.936 02:05:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:07.936 02:05:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=909139 00:06:07.936 02:05:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:07.936 02:05:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 909139 00:06:07.936 02:05:35 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 909139 ']' 00:06:07.936 02:05:35 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.936 02:05:35 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.936 02:05:35 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.936 02:05:35 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.936 02:05:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.936 [2024-07-27 02:05:35.867783] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:06:07.936 [2024-07-27 02:05:35.867865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909139 ] 00:06:07.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.936 [2024-07-27 02:05:35.899263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
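Note: the spdk_tgt instance above was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the cmdline test that follows expects exactly those two methods to be exposed and a JSON-RPC "Method not found" (-32601) error for anything else. A hypothetical manual probe of the same behaviour, assuming the default /var/tmp/spdk.sock socket used in this run:

# Hypothetical manual check against the allowlisted target; the last call
# should be rejected with JSON-RPC error -32601 (Method not found).
scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods        # lists only the two allowed methods
scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version       # allowed: returns the version JSON
scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats \
    || echo 'rejected as expected'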
00:06:07.936 [2024-07-27 02:05:35.925951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.936 [2024-07-27 02:05:36.010196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.195 02:05:36 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.195 02:05:36 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:08.195 02:05:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:08.453 { 00:06:08.453 "version": "SPDK v24.09-pre git sha1 cac68eec0", 00:06:08.453 "fields": { 00:06:08.453 "major": 24, 00:06:08.453 "minor": 9, 00:06:08.453 "patch": 0, 00:06:08.453 "suffix": "-pre", 00:06:08.453 "commit": "cac68eec0" 00:06:08.453 } 00:06:08.453 } 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:08.453 02:05:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:08.453 02:05:36 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.734 request: 00:06:08.734 { 00:06:08.734 "method": 
"env_dpdk_get_mem_stats", 00:06:08.734 "req_id": 1 00:06:08.734 } 00:06:08.734 Got JSON-RPC error response 00:06:08.734 response: 00:06:08.734 { 00:06:08.734 "code": -32601, 00:06:08.734 "message": "Method not found" 00:06:08.734 } 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.734 02:05:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 909139 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 909139 ']' 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 909139 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 909139 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 909139' 00:06:08.734 killing process with pid 909139 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 909139 00:06:08.734 02:05:36 app_cmdline -- common/autotest_common.sh@974 -- # wait 909139 00:06:09.308 00:06:09.308 real 0m1.459s 00:06:09.308 user 0m1.744s 00:06:09.308 sys 0m0.492s 00:06:09.308 02:05:37 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.308 02:05:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.308 ************************************ 00:06:09.308 END TEST app_cmdline 00:06:09.308 ************************************ 00:06:09.308 02:05:37 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:09.308 02:05:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.308 02:05:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.308 02:05:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.308 ************************************ 00:06:09.308 START TEST version 00:06:09.308 ************************************ 00:06:09.308 02:05:37 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:09.308 * Looking for test storage... 
00:06:09.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.308 02:05:37 version -- app/version.sh@17 -- # get_header_version major 00:06:09.308 02:05:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # cut -f2 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.308 02:05:37 version -- app/version.sh@17 -- # major=24 00:06:09.308 02:05:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:09.308 02:05:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # cut -f2 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.308 02:05:37 version -- app/version.sh@18 -- # minor=9 00:06:09.308 02:05:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:09.308 02:05:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # cut -f2 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.308 02:05:37 version -- app/version.sh@19 -- # patch=0 00:06:09.308 02:05:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:09.308 02:05:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # cut -f2 00:06:09.308 02:05:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.308 02:05:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:09.308 02:05:37 version -- app/version.sh@22 -- # version=24.9 00:06:09.308 02:05:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:09.308 02:05:37 version -- app/version.sh@28 -- # version=24.9rc0 00:06:09.308 02:05:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:09.308 02:05:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:09.308 02:05:37 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:09.308 02:05:37 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:09.308 00:06:09.308 real 0m0.108s 00:06:09.308 user 0m0.062s 00:06:09.308 sys 0m0.067s 00:06:09.308 02:05:37 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.308 02:05:37 version -- common/autotest_common.sh@10 -- # set +x 00:06:09.308 ************************************ 00:06:09.308 END TEST version 00:06:09.308 ************************************ 00:06:09.308 02:05:37 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@201 -- # [[ 0 -eq 1 ]] 00:06:09.308 02:05:37 -- spdk/autotest.sh@207 -- # uname -s 00:06:09.308 02:05:37 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:06:09.308 02:05:37 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:06:09.308 02:05:37 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 
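Note: the version test above reduces to comparing the version macros in the C header with the Python package's version string. A condensed sketch of that flow, reconstructed from the xtrace lines (the suffix-to-rc0 mapping for the elided @28 step is an assumption):

# Condensed reconstruction of the version.sh flow traced above:
get_header_version() {
    # '#define SPDK_VERSION_MAJOR<TAB>24' -> '24'
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
        | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)    # 24
minor=$(get_header_version MINOR)    # 9
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre
version=$major.$minor                # patch is appended only when nonzero
[[ -n $suffix ]] && version=${version}rc0   # assumed mapping of '-pre' to 'rc0'
py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]]      # both sides are 24.9rc0 in the run above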
00:06:09.308 02:05:37 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@269 -- # timing_exit lib 00:06:09.308 02:05:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.308 02:05:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.308 02:05:37 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@285 -- # '[' 1 -eq 1 ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@286 -- # export NET_TYPE 00:06:09.308 02:05:37 -- spdk/autotest.sh@289 -- # '[' tcp = rdma ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@292 -- # '[' tcp = tcp ']' 00:06:09.308 02:05:37 -- spdk/autotest.sh@293 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:09.308 02:05:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.308 02:05:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.308 02:05:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.308 ************************************ 00:06:09.308 START TEST nvmf_tcp 00:06:09.308 ************************************ 00:06:09.308 02:05:37 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:09.568 * Looking for test storage... 00:06:09.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:09.568 02:05:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:09.568 02:05:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:09.568 02:05:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:09.568 02:05:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.568 02:05:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.568 02:05:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.568 ************************************ 00:06:09.568 START TEST nvmf_target_core 00:06:09.568 ************************************ 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:09.568 * Looking for test storage... 00:06:09.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.568 ************************************ 00:06:09.568 START TEST nvmf_abort 00:06:09.568 ************************************ 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:09.568 * Looking for test storage... 
00:06:09.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.568 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
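Note: the nvmftestinit call unwinding below is the common setup for every phy-mode TCP target test: it installs the nvmftestfini cleanup trap, scans the PCI bus for supported NICs (e810/x722/mlx device IDs are whitelisted), and then wires the two detected ports into a loopback topology using a network namespace. A rough shape of that sequence, reconstructed from the xtrace output below rather than from the script source:

# Rough reconstruction of the setup steps traced below:
trap nvmftestfini SIGINT SIGTERM EXIT
prepare_net_devs                    # finds cvl_0_0 / cvl_0_1 under 0000:0a:00.0/.1
ip netns add cvl_0_0_ns_spdk        # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                  # sanity-check both directions before starting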
00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:09.569 02:05:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:11.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:11.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:11.478 02:05:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:11.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:11.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:11.478 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:11.479 
02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:11.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:11.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:06:11.479 00:06:11.479 --- 10.0.0.2 ping statistics --- 00:06:11.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.479 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:11.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:11.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:06:11.479 00:06:11.479 --- 10.0.0.1 ping statistics --- 00:06:11.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.479 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.479 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=911117 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 911117 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 911117 ']' 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.739 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.739 [2024-07-27 02:05:39.686282] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:06:11.739 [2024-07-27 02:05:39.686385] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.739 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.739 [2024-07-27 02:05:39.726103] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.739 [2024-07-27 02:05:39.758449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.739 [2024-07-27 02:05:39.852857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:11.739 [2024-07-27 02:05:39.852924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:11.739 [2024-07-27 02:05:39.852941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:11.739 [2024-07-27 02:05:39.852955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:11.739 [2024-07-27 02:05:39.852967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
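Note: with the namespace wiring above, the nvmf_tgt (pid 911117) runs inside cvl_0_0_ns_spdk, and the abort test below creates subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. For reference, a hypothetical manual connection from the initiator side over the same path, reusing the hostnqn generated earlier in this trace (the test itself drives the target through the SPDK abort example, not nvme-cli):

# Hypothetical manual connect from the default namespace (initiator side):
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55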
00:06:11.739 [2024-07-27 02:05:39.853093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.739 [2024-07-27 02:05:39.853172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.739 [2024-07-27 02:05:39.853175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.000 02:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 [2024-07-27 02:05:39.998180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 Malloc0 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 Delay0 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 [2024-07-27 02:05:40.068523] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.000 02:05:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:12.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.259 [2024-07-27 02:05:40.175300] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:14.166 Initializing NVMe Controllers 00:06:14.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:14.167 controller IO queue size 128 less than required 00:06:14.167 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:14.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:14.167 Initialization complete. Launching workers. 
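[editor's note] The abort test's target-side configuration, interleaved with xtrace output above, is easier to read as a plain RPC sequence. A sketch of the same steps, with every flag taken from the trace ($SPDK_DIR is illustrative):

  rpc="$SPDK_DIR/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0                  # 64 MiB RAM-backed bdev, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s injected read/write latencies
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # drive 128-deep reads for 1 s from one core and abort them, as in the output below
  "$SPDK_DIR/build/examples/abort" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The deliberately slow Delay0 bdev keeps commands in flight long enough for the abort example to race abort requests against them.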
00:06:14.167 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33480 00:06:14.167 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33541, failed to submit 62 00:06:14.167 success 33484, unsuccess 57, failed 0 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:14.167 rmmod nvme_tcp 00:06:14.167 rmmod nvme_fabrics 00:06:14.167 rmmod nvme_keyring 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 911117 ']' 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 911117 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 911117 ']' 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 911117 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.167 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 911117 00:06:14.425 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:14.425 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:14.425 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 911117' 00:06:14.425 killing process with pid 911117 00:06:14.425 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 911117 00:06:14.425 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 911117 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.686 02:05:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:16.592 00:06:16.592 real 0m7.045s 00:06:16.592 user 0m10.233s 00:06:16.592 sys 0m2.413s 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:16.592 ************************************ 00:06:16.592 END TEST nvmf_abort 00:06:16.592 ************************************ 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.592 ************************************ 00:06:16.592 START TEST nvmf_ns_hotplug_stress 00:06:16.592 ************************************ 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:16.592 * Looking for test storage... 
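[editor's note] As the START/END TEST banners and the real/user/sys totals above show, run_test brackets each test script with banners and a timing wrapper. Assuming the same phy TCP host setup, the hotplug test it launches here can also be invoked directly, e.g.:

  sudo "$SPDK_DIR/test/nvmf/target/ns_hotplug_stress.sh" --transport=tcp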
00:06:16.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.592 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.851 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:16.852 02:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.761 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:18.762 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.762 02:05:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:18.762 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:18.762 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:18.762 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:18.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:18.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:06:18.762 00:06:18.762 --- 10.0.0.2 ping statistics --- 00:06:18.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.762 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:18.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:06:18.762 00:06:18.762 --- 10.0.0.1 ping statistics --- 00:06:18.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.762 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=913350 00:06:18.762 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:18.763 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 913350 00:06:18.763 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 913350 ']' 00:06:18.763 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.763 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.763 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
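[editor's note] The nvmf_tcp_init plumbing traced above reduces to a handful of iproute2/iptables commands. A sketch with the device names from this run (cvl_0_0 is the target-side port moved into the namespace, cvl_0_1 stays on the initiator side; error handling omitted):

  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                        # initiator -> target
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The two pings at the end are the same reachability check whose statistics appear in the log above.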
00:06:18.763 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.763 02:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.021 [2024-07-27 02:05:46.965639] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:06:19.021 [2024-07-27 02:05:46.965708] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.021 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.021 [2024-07-27 02:05:47.002388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.021 [2024-07-27 02:05:47.034320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.021 [2024-07-27 02:05:47.131721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.021 [2024-07-27 02:05:47.131776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.021 [2024-07-27 02:05:47.131793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.021 [2024-07-27 02:05:47.131807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.021 [2024-07-27 02:05:47.131819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.021 [2024-07-27 02:05:47.131907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.021 [2024-07-27 02:05:47.132028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.022 [2024-07-27 02:05:47.132030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:19.281 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:19.539 [2024-07-27 02:05:47.489320] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.539 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:19.797 02:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.055 [2024-07-27 02:05:48.004584] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.055 02:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:20.313 02:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:20.570 Malloc0 00:06:20.571 02:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:20.828 Delay0 00:06:20.828 02:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.085 02:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:21.343 NULL1 00:06:21.343 02:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:21.601 02:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=913765 00:06:21.601 02:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:21.601 02:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:21.601 02:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.538 Read completed with error (sct=0, sc=11) 00:06:22.538 02:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.796 02:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:22.796 02:05:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:23.053 true 00:06:23.053 02:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:23.053 02:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.989 02:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.247 02:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:24.247 02:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:24.505 true 00:06:24.505 02:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:24.505 02:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.763 02:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.020 02:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:25.020 02:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:25.306 true 00:06:25.306 02:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:25.306 02:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.873 02:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.130 02:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:26.130 02:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:26.388 true 00:06:26.388 02:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:26.388 02:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.646 02:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.904 02:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:26.904 02:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:27.161 true 00:06:27.161 02:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:27.161 02:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.100 02:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.357 02:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:28.357 02:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:28.615 true 00:06:28.615 02:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:28.615 02:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.873 02:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.131 02:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:29.131 02:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:29.389 true 00:06:29.389 02:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:29.389 02:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.327 02:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.584 02:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:30.584 
02:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:30.842 true 00:06:30.842 02:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:30.842 02:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.100 02:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.358 02:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:31.358 02:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:31.616 true 00:06:31.616 02:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:31.616 02:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.550 02:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.550 02:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:32.550 02:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:32.807 true 00:06:32.807 02:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:32.807 02:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.064 02:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.322 02:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:33.322 02:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:33.581 true 00:06:33.581 02:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:33.581 02:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.518 02:06:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.776 02:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:34.776 02:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:35.035 true 00:06:35.035 02:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:35.035 02:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.293 02:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.551 02:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:35.551 02:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:35.808 true 00:06:35.808 02:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:35.808 02:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.066 02:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.324 02:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:36.324 02:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:36.582 true 00:06:36.582 02:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:36.582 02:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.520 02:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.778 02:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:37.778 02:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1015 00:06:38.036 true 00:06:38.036 02:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:38.036 02:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.294 02:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.552 02:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:38.552 02:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:38.809 true 00:06:38.809 02:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:38.809 02:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.745 02:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.013 02:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:40.013 02:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:40.299 true 00:06:40.299 02:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:40.299 02:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.557 02:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.815 02:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:40.815 02:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:41.073 true 00:06:41.073 02:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:41.073 02:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.007 02:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.265 02:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:42.265 02:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:42.523 true 00:06:42.523 02:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:42.523 02:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.781 02:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.040 02:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:43.040 02:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:43.040 true 00:06:43.300 02:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:43.300 02:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.300 02:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.558 02:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:43.559 02:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:43.817 true 00:06:43.817 02:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:43.817 02:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.193 02:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.193 02:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:45.193 02:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:45.452 true 00:06:45.452 02:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:45.452 02:06:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.708 02:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.966 02:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:45.966 02:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:46.224 true 00:06:46.224 02:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:46.224 02:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.161 02:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.420 02:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:47.420 02:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:47.420 true 00:06:47.678 02:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:47.678 02:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.678 02:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.936 02:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:47.936 02:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:48.194 true 00:06:48.194 02:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765 00:06:48.194 02:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.132 02:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.390 02:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:49.390 02:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:06:49.647 true
00:06:49.647 02:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765
00:06:49.647 02:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.905 02:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:50.163 02:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:50.163 02:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:50.421 true
00:06:50.421 02:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765
00:06:50.421 02:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.357 02:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:51.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:51.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:51.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:51.357 02:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:51.357 02:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:51.615 Initializing NVMe Controllers
00:06:51.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:51.615 Controller IO queue size 128, less than required.
00:06:51.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:51.615 Controller IO queue size 128, less than required.
00:06:51.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:51.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:51.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:51.615 Initialization complete. Launching workers.
00:06:51.615 ========================================================
00:06:51.615                                                                           Latency(us)
00:06:51.615 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:06:51.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     940.95       0.46   71681.46    2322.59 1031290.56
00:06:51.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10167.99       4.96   12589.92    3372.80  445145.59
00:06:51.615 ========================================================
00:06:51.615 Total                                                              :   11108.94       5.42   17595.10    2322.59 1031290.56
00:06:51.615
00:06:51.615 true
00:06:51.615 02:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 913765
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (913765) - No such process
00:06:51.615 02:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 913765
00:06:51.615 02:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.873 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:52.131 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:52.131 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:52.131 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:52.131 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.131 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:52.389 null0
00:06:52.389 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.389 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.389 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:52.647 null1
00:06:52.647 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.647 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.647 02:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:52.906 null2
00:06:52.906 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:52.906 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:52.906 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
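The serial phase ends here: the @44-@50 loop traced above keeps cycling while the I/O generator (PID 913765) stays alive, hot-removing and re-adding namespace 1 and growing the NULL1 bdev one step per pass (null_size 1016, 1017, ... 1028). kill -0 sends no signal at all; it only probes whether the PID still exists, so the "(913765) - No such process" error above is the loop's normal exit condition once the generator finishes. A minimal sketch of that loop, reconstructed from the xtrace (the rpc, nqn and perf_pid names are mine, not the script's):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000                                   # running counter; arbitrary start for the sketch
    while kill -0 "$perf_pid"; do                    # @44: perf_pid is the I/O generator, 913765 in this run
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # @45: hot-remove namespace 1 under load
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # @46: re-attach the Delay0 bdev as a namespace
        ((++null_size))                              # @49
        "$rpc" bdev_null_resize NULL1 "$null_size"   # @50: grow the other namespace's backing bdev
    done

Two details of the output above are worth decoding. The suppressed "Read completed with error (sct=0, sc=11)" messages are the expected casualties of the hot-remove window: status code type 0, status 0x0b is the NVMe generic status Invalid Namespace or Format, which is what reads that land while namespace 1 is detached should complete with. And the summary table is internally consistent: the Total row's Average is the IOPS-weighted mean of the per-namespace averages, (940.95 * 71681.46 + 10167.99 * 12589.92) / 11108.94 ≈ 17595.1 us, matching the printed 17595.10.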
00:06:53.163 null3 00:06:53.163 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:53.163 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:53.163 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:53.420 null4 00:06:53.420 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:53.420 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:53.420 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:53.678 null5 00:06:53.678 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:53.678 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:53.678 02:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:53.936 null6 00:06:53.936 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:53.936 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:53.936 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:54.194 null7 00:06:54.194 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.194 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.194 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:54.194 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.194 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.194 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
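With both namespaces detached (@54/@55), the script provisions backing devices for the parallel phase. In bdev_null_create the positional arguments after the bdev name are total size in MB and block size in bytes, so null0 through null7 above are eight 100 MB null bdevs (writes are discarded and reads are served with no real storage behind them) with 4 KiB blocks. Reconstructed from the @59/@60 trace lines, the creation loop is just:

    nthreads=8                                       # @58
    for ((i = 0; i < nthreads; i++)); do             # @59
        "$rpc" bdev_null_create "null$i" 100 4096    # @60: name, size in MB, block size in bytes
    done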
00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
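The interleaving from here on is eight copies of the script's add_remove helper running as background jobs ("add_remove 4 null3" and its siblings), one per null bdev, each pinned to its own namespace ID so no two workers ever race on the same nsid. Pieced together from the @14-@18 and @62-@66 trace lines, the shape is roughly this (a sketch, not the script verbatim); the "wait 918324 918325 ..." entry below is the final @66 wait on all eight worker PIDs:

    add_remove() {
        local nsid=$1 bdev=$2                                          # @14
        for ((i = 0; i < 10; i++)); do                                 # @16: ten add/remove round trips
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"     # @17: attach bdev as namespace nsid
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"             # @18: detach it again
        done
    }

    for ((i = 0; i < nthreads; i++)); do     # @62
        add_remove $((i + 1)) "null$i" &     # @63: nsid 1..8 paired with null0..null7
        pids+=($!)                           # @64
    done
    wait "${pids[@]}"                        # @66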
00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 918324 918325 918326 918329 918331 918333 918335 918337 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.195 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.454 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 02:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.008 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.267 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.525 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.525 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.525 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.526 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.526 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.526 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.526 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.526 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.784 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.785 02:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.043 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.043 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.043 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.043 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.043 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.043 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.043 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
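Each rpc.py invocation in this churn is one JSON-RPC request to the running target; the CLI is only a thin client. Assuming the default RPC socket /var/tmp/spdk.sock (no -s flag appears anywhere in this trace) and a netcat built with Unix-socket support, the attach call above is roughly equivalent to the hand-written request below; the params schema is quoted from memory, so treat it as a sketch rather than a reference:

    # roughly: rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
    printf '%s\n' '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "namespace": {"bdev_name": "null3", "nsid": 4}}}' | nc -U /var/tmp/spdk.sock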
00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.300 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.559 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.818 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.076 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.076 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.076 02:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.076 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.076 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.076 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.076 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.076 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.076 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.076 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
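A note on the trace format itself: everything after the leading elapsed-time column is bash xtrace output, produced by set -x with a PS4 prompt that stamps each command with the time of day, the test identifier and the source file and line ("target/ns_hotplug_stress.sh@17 -- # ..."). An assumed, simplified PS4 that yields the same shape; the harness's real string is richer, and TEST_TAG here is a made-up stand-in:

    export PS4='$(date +%T) ${TEST_TAG:-test} -- ${BASH_SOURCE##*/}@${LINENO} -- # '
    set -x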
00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.077 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.335 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.594 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.852 02:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.110 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.369 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.627 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.885 02:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
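The sh@16/sh@17/sh@18 xtrace entries above and below are the hot-plug stress loop from target/ns_hotplug_stress.sh: concurrent workers repeatedly attach the null bdevs as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 and detach them again, ten passes each. A minimal sketch consistent with the trace; the helper name add_remove and the backgrounding are assumptions, but the three traced line numbers map to the loop header, the add, and the remove:

  # Sketch reconstructed from the sh@16-sh@18 trace; add_remove and the
  # backgrounding are assumed, the rpc.py calls are verbatim from the log.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; ++i)); do                                                 # sh@16
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
      done
  }

  # Eight workers hammer namespaces 1-8 at once, which is why the add and
  # remove entries in the log interleave across null0..null7.
  for n in $(seq 0 7); do
      add_remove $((n + 1)) "null$n" &
  done
  wait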
00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.144 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.402 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:59.661 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:59.661 rmmod nvme_tcp 00:06:59.661 rmmod nvme_fabrics 00:06:59.661 rmmod nvme_keyring 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 913350 ']' 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 913350 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 913350 ']' 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 913350 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 913350 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 913350' 00:06:59.919 killing process with pid 913350 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 913350 00:06:59.919 02:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 913350 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.180 02:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:02.090 00:07:02.090 real 0m45.432s 00:07:02.090 user 3m27.561s 00:07:02.090 sys 0m16.182s 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.090 ************************************ 00:07:02.090 END TEST nvmf_ns_hotplug_stress 00:07:02.090 ************************************ 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.090 ************************************ 00:07:02.090 START TEST nvmf_delete_subsystem 00:07:02.090 ************************************ 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:02.090 * Looking for test storage... 
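The stretch just above, between the last loop iterations and the END TEST banner, is nvmftestfini from nvmf/common.sh tearing down the previous rig: the kernel initiator modules are unloaded with retries (the rmmod lines are their stderr), the nvmf_tgt process, pid 913350 in this run, is killed and reaped, and the test address is flushed. A condensed sketch; the retry/break condition is an assumption, the commands are from the trace:

  # Condensed from the nvmf/common.sh@117-@125, @489-@490 and @279 trace.
  nvmfpid=913350                         # this run's nvmf_tgt pid
  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # also pulls out nvme_fabrics/nvme_keyring
  done
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"     # killprocess(): stop the target reactors
  ip -4 addr flush cvl_0_1               # drop the initiator-side test address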
00:07:02.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.090 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.091 02:06:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
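The array declarations above seed gather_supported_nvmf_pci_devs: e810, x722 and mlx hold known NIC device IDs (this box matches the two e810 0x8086:0x159b ports), and each matched PCI address is then mapped to its kernel netdev through sysfs. A sketch of that mapping, following the sh@382-sh@401 entries below; pci_devs is assumed to already hold the matched addresses:

  # PCI-address-to-netdev resolution, per the nvmf/common.sh@382-@401 trace.
  pci_devs=(0000:0a:00.0 0000:0a:00.1)                  # the matched E810 ports here
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdevs bound to this port
      [[ -e ${pci_net_devs[0]} ]] || continue           # skip ports with no netdev
      pci_net_devs=("${pci_net_devs[@]##*/}")           # sysfs path -> interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
  # Here that yields cvl_0_0 and cvl_0_1, the two interfaces used below.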
00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.626 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.627 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.627 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.627 02:06:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:04.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:07:04.627 00:07:04.627 --- 10.0.0.2 ping statistics --- 00:07:04.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.627 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:07:04.627 00:07:04.627 --- 10.0.0.1 ping statistics --- 00:07:04.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.627 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=921088 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 921088 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 921088 ']' 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.627 [2024-07-27 02:06:32.463700] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
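nvmf_tcp_init, traced above, builds the whole TCP rig on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), reachability is proven with one ping in each direction, and nvmf_tgt itself is launched under ip netns exec. The sequence, collected from the trace with paths shortened:

  # Single-host target/initiator split, per the nvmf/common.sh@248-@268
  # and @480 entries above.
  ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"                  # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1           # target -> initiator
  ip netns exec "$ns" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # pid 921088 here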
00:07:04.627 [2024-07-27 02:06:32.463798] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.627 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.627 [2024-07-27 02:06:32.502783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:04.627 [2024-07-27 02:06:32.529447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.627 [2024-07-27 02:06:32.615438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.627 [2024-07-27 02:06:32.615500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.627 [2024-07-27 02:06:32.615529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.627 [2024-07-27 02:06:32.615541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.627 [2024-07-27 02:06:32.615551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.627 [2024-07-27 02:06:32.615689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.627 [2024-07-27 02:06:32.615694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.627 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.628 [2024-07-27 02:06:32.748739] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.628 [2024-07-27 02:06:32.764958] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.628 NULL1 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.628 Delay0 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.628 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.886 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.886 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=921119 00:07:04.886 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:04.886 02:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:04.886 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.886 [2024-07-27 02:06:32.839683] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
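The RPCs above build a deliberately slow target for the delete test: a 1000 MiB, 512-byte-block null bdev is wrapped in a delay bdev that adds 1,000,000 microseconds (one second) to every read and write, and exported as a namespace of cnode1. spdk_nvme_perf then drives queue-depth-128 random I/O at it for five seconds, so plenty of commands are still outstanding when the subsystem is deleted two seconds in; the flood of 'completed with error (sct=0, sc=8)' entries below is those commands being aborted as the controller queues are torn down. Collected from the trace, rpc.py path shortened:

  # Setup and trigger, per the delete_subsystem.sh@15-@32 trace above/below.
  rpc_py=scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_null_create NULL1 1000 512          # 1000 MiB backing, 512 B blocks
  $rpc_py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000  # avg/p99 latencies, microseconds
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                          # let the delayed queues fill
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # aborts all in-flight I/O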
00:07:06.789 02:06:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:06.789 02:06:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.789 02:06:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:07.048-00:07:07.988 [several hundred near-identical perf completions elided: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and repeated "starting I/O failed: -6" entries, interleaved with the qpair recv-state errors preserved below]
00:07:07.048 [2024-07-27 02:06:34.984006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac9400d330 is same with the state(5) to be set
00:07:07.049 [2024-07-27 02:06:34.984634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513300 is same with the state(5) to be set
00:07:07.049 [2024-07-27 02:06:34.985168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac94000c00 is same with the state(5) to be set
00:07:07.987 [2024-07-27 02:06:35.941748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ab40 is same with the state(5) to be set
00:07:07.987 [2024-07-27 02:06:35.985517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac9400d660 is same with the state(5) to be set
00:07:07.987 [2024-07-27 02:06:35.985839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac9400d000 is same with the state(5) to be set
00:07:07.987 [2024-07-27 02:06:35.986696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x50cd40 is same with the state(5) to be set
00:07:07.988 [2024-07-27 02:06:35.986901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x50d100 is same with the state(5) to be set
00:07:07.988 Initializing NVMe Controllers 00:07:07.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:07.988 Controller IO queue size 128, less than required. 00:07:07.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:07.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:07.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:07.988 Initialization complete. Launching workers.
00:07:07.988 ======================================================== 00:07:07.988 Latency(us) 00:07:07.988 Device Information : IOPS MiB/s Average min max 00:07:07.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.58 0.09 882149.92 604.48 1013524.05 00:07:07.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.15 0.08 899625.81 1172.54 1013807.60 00:07:07.988 ======================================================== 00:07:07.988 Total : 344.72 0.17 890674.13 604.48 1013807.60 00:07:07.988 00:07:07.988 [2024-07-27 02:06:35.987571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52ab40 (9): Bad file descriptor 00:07:07.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:07.988 02:06:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.988 02:06:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:07.988 02:06:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 921119 00:07:07.988 02:06:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 921119 00:07:08.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (921119) - No such process 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 921119 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 921119 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:08.555 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 921119 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.556 02:06:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.556 [2024-07-27 02:06:36.510142] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=921642 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 921642 00:07:08.556 02:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.556 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.556 [2024-07-27 02:06:36.575103] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
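The deluge of "(sct=0, sc=8)" completions earlier in this test is its expected signature rather than a failure: nvmf_delete_subsystem tears down the subsystem's queues while spdk_nvme_perf still has commands in flight, so each in-flight command completes with NVMe status "Command Aborted due to SQ Deletion" (status code type 0x0, status code 0x08), new submissions fail with -6 (likely -ENXIO once the qpair is gone), and perf exits reporting "errors occurred". The harness then polls with kill -0, which delivers no signal and only checks that the PID still exists, and finally uses the NOT helper to assert that waiting on the reaped pid fails. A condensed sketch of the pattern, assuming rpc_cmd wraps scripts/rpc.py as in SPDK's autotest harness; the perf arguments are copied from the delete_subsystem.sh@52 trace above:

    # sketch of the delete-under-I/O pattern, not the verbatim test script
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # aborts all in-flight commands
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do                  # true while the process exists
        (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; break; }
        sleep 0.5
    done
    NOT wait "$perf_pid"   # harness helper: asserts that waiting on the dead pid now fails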
00:07:09.123 02:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.123 02:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 921642 00:07:09.123 02:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[five further identical poll iterations elided, 00:07:09.381 through 00:07:11.701: (( delay++ > 20 )) / kill -0 921642 / sleep 0.5]
00:07:11.959 Initializing NVMe Controllers 00:07:11.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.959 Controller IO queue size 128, less than required. 00:07:11.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:11.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:11.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:11.959 Initialization complete. Launching workers.
00:07:11.959 ======================================================== 00:07:11.959 Latency(us) 00:07:11.959 Device Information : IOPS MiB/s Average min max 00:07:11.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004194.12 1000274.17 1042167.24 00:07:11.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004731.31 1000292.71 1012149.58 00:07:11.959 ======================================================== 00:07:11.959 Total : 256.00 0.12 1004462.72 1000274.17 1042167.24 00:07:11.959 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 921642 00:07:11.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (921642) - No such process 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 921642 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.959 rmmod nvme_tcp 00:07:11.959 rmmod nvme_fabrics 00:07:11.959 rmmod nvme_keyring 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 921088 ']' 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 921088 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 921088 ']' 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 921088 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.959 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 921088 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 921088' 00:07:12.217 killing process with pid 921088 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 921088 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 921088 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.217 02:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:14.755 00:07:14.755 real 0m12.237s 00:07:14.755 user 0m27.742s 00:07:14.755 sys 0m3.012s 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.755 ************************************ 00:07:14.755 END TEST nvmf_delete_subsystem 00:07:14.755 ************************************ 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.755 ************************************ 00:07:14.755 START TEST nvmf_host_management 00:07:14.755 ************************************ 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:14.755 * Looking for test storage... 
00:07:14.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.755 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same tool directories repeated several more times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=[equally duplicated value, elided]
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=[equally duplicated value, elided]
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [duplicated PATH value, elided]
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.756 02:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:16.655 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.656 
02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:16.656 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:16.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:16.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:16.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:16.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:07:16.656 00:07:16.656 --- 10.0.0.2 ping statistics --- 00:07:16.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.656 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:07:16.656 00:07:16.656 --- 10.0.0.1 ping statistics --- 00:07:16.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.656 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.656 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=923986 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 923986 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 923986 ']' 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.917 02:06:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.917 [2024-07-27 02:06:44.883661] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:07:16.917 [2024-07-27 02:06:44.883741] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.917 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.917 [2024-07-27 02:06:44.922425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.917 [2024-07-27 02:06:44.949266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.917 [2024-07-27 02:06:45.040100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.917 [2024-07-27 02:06:45.040157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.917 [2024-07-27 02:06:45.040170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.917 [2024-07-27 02:06:45.040181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.917 [2024-07-27 02:06:45.040190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
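The reactor start-up notices that follow show cores 1-4 coming up, matching the -m 0x1E core mask. By this point the harness has split the two E810 ports across network namespaces (target-side cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, initiator-side cvl_0_1 at 10.0.0.1 in the root namespace) and launched nvmf_tgt inside that namespace. A condensed recap of those steps, copied from the nvmf_tcp_init and nvmfappstart traces above; the rpc_get_methods polling at the end is an assumed approximation of the waitforlisten helper, not its verbatim logic:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port joins the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator-to-target reachability check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # wait until the app answers on its RPC socket (one way to approximate waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done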
00:07:16.917 [2024-07-27 02:06:45.040285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.917 [2024-07-27 02:06:45.040411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.917 [2024-07-27 02:06:45.040461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:16.917 [2024-07-27 02:06:45.040463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.176 [2024-07-27 02:06:45.192209] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.176 Malloc0 00:07:17.176 [2024-07-27 02:06:45.253242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=924034 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 924034 /var/tmp/bdevperf.sock 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 924034 ']' 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:17.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:17.176 { 00:07:17.176 "params": { 00:07:17.176 "name": "Nvme$subsystem", 00:07:17.176 "trtype": "$TEST_TRANSPORT", 00:07:17.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.176 "adrfam": "ipv4", 00:07:17.176 "trsvcid": "$NVMF_PORT", 00:07:17.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.176 "hdgst": ${hdgst:-false}, 00:07:17.176 "ddgst": ${ddgst:-false} 00:07:17.176 }, 00:07:17.176 "method": "bdev_nvme_attach_controller" 00:07:17.176 } 00:07:17.176 EOF 00:07:17.176 )") 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:17.176 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:17.176 "params": { 00:07:17.176 "name": "Nvme0", 00:07:17.176 "trtype": "tcp", 00:07:17.176 "traddr": "10.0.0.2", 00:07:17.176 "adrfam": "ipv4", 00:07:17.176 "trsvcid": "4420", 00:07:17.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.176 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.176 "hdgst": false, 00:07:17.176 "ddgst": false 00:07:17.176 }, 00:07:17.176 "method": "bdev_nvme_attach_controller" 00:07:17.176 }' 00:07:17.176 [2024-07-27 02:06:45.333591] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:07:17.177 [2024-07-27 02:06:45.333675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924034 ] 00:07:17.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.434 [2024-07-27 02:06:45.367033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:17.434 [2024-07-27 02:06:45.396144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.434 [2024-07-27 02:06:45.484524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.694 Running I/O for 10 seconds... 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 
']' 00:07:17.694 02:06:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=449 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 449 -ge 100 ']' 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.956 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.957 [2024-07-27 02:06:46.047762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047922] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.047999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.048996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the 
state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abaae0 is same with the state(5) to be set 00:07:17.957 [2024-07-27 02:06:46.049709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.049780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.049815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.049846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.049877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.049908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.049939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.049976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.049990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.050007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.957 [2024-07-27 02:06:46.050021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.957 [2024-07-27 02:06:46.050037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.050983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.050998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.958 [2024-07-27 02:06:46.051278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.958 [2024-07-27 02:06:46.051293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.959 [2024-07-27 02:06:46.051821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:17.959 [2024-07-27 02:06:46.051842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b355f0 is same with the state(5) to be set 00:07:17.959 [2024-07-27 02:06:46.051925] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b355f0 was disconnected and freed. reset controller. 
00:07:17.959 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.959 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:17.959 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.959 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:17.959 [2024-07-27 02:06:46.053169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:17.959 task offset: 57344 on job bdev=Nvme0n1 fails
00:07:17.959
00:07:17.959                                                                                                 Latency(us)
00:07:17.959 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:17.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:17.959 Job: Nvme0n1 ended in about 0.39 seconds with error
00:07:17.959 Verification LBA range: start 0x0 length 0x400
00:07:17.959 Nvme0n1                     :       0.39    1141.16      71.32     163.02     0.00   47727.88    7670.14   40389.59
00:07:17.959 ===================================================================================================================
00:07:17.959 Total                       :               1141.16      71.32     163.02     0.00   47727.88    7670.14   40389.59
00:07:17.959 [2024-07-27 02:06:46.055283] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:17.959 [2024-07-27 02:06:46.055314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703b50 (9): Bad file descriptor
00:07:17.959 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.959 02:06:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-27 02:06:46.109874] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 924034 00:07:19.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (924034) - No such process 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:19.338 { 00:07:19.338 "params": { 00:07:19.338 "name": "Nvme$subsystem", 00:07:19.338 "trtype": "$TEST_TRANSPORT", 00:07:19.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:19.338 "adrfam": "ipv4", 00:07:19.338 "trsvcid": "$NVMF_PORT", 00:07:19.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:19.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:19.338 "hdgst": ${hdgst:-false}, 00:07:19.338 "ddgst": ${ddgst:-false} 00:07:19.338 }, 00:07:19.338 "method": "bdev_nvme_attach_controller" 00:07:19.338 } 00:07:19.338 EOF 00:07:19.338 )") 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:19.338 02:06:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:19.338 "params": { 00:07:19.338 "name": "Nvme0", 00:07:19.338 "trtype": "tcp", 00:07:19.338 "traddr": "10.0.0.2", 00:07:19.338 "adrfam": "ipv4", 00:07:19.338 "trsvcid": "4420", 00:07:19.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:19.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:19.338 "hdgst": false, 00:07:19.338 "ddgst": false 00:07:19.338 }, 00:07:19.338 "method": "bdev_nvme_attach_controller" 00:07:19.338 }' 00:07:19.338 [2024-07-27 02:06:47.109725] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:07:19.338 [2024-07-27 02:06:47.109812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924309 ] 00:07:19.338 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.338 [2024-07-27 02:06:47.140953] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:19.338 [2024-07-27 02:06:47.170015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.338 [2024-07-27 02:06:47.259237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.596 Running I/O for 1 seconds...
00:07:20.534
00:07:20.534                                                                                                 Latency(us)
00:07:20.534 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:20.534 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:20.534 Verification LBA range: start 0x0 length 0x400
00:07:20.534 Nvme0n1                     :       1.04    1358.99      84.94       0.00     0.00   46391.96   13107.20   37865.24
00:07:20.534 ===================================================================================================================
00:07:20.534 Total                       :               1358.99      84.94       0.00     0.00   46391.96   13107.20   37865.24
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:20.792 rmmod nvme_tcp
00:07:20.792 rmmod nvme_fabrics
00:07:20.792 rmmod nvme_keyring
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 923986 ']'
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 923986
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 923986 ']'
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 923986
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 923986
00:07:20.792 02:06:48
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 923986' 00:07:20.792 killing process with pid 923986 00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 923986 00:07:20.792 02:06:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 923986 00:07:21.050 [2024-07-27 02:06:49.080144] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.050 02:06:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:23.587 00:07:23.587 real 0m8.694s 00:07:23.587 user 0m18.561s 00:07:23.587 sys 0m2.831s 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.587 ************************************ 00:07:23.587 END TEST nvmf_host_management 00:07:23.587 ************************************ 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.587 ************************************ 00:07:23.587 START TEST nvmf_lvol 00:07:23.587 ************************************ 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:23.587 * Looking for test storage... 
00:07:23.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.587 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.588 02:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.488 02:06:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.488 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:07:25.489 00:07:25.489 --- 10.0.0.2 ping statistics --- 00:07:25.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.489 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:07:25.489 00:07:25.489 --- 10.0.0.1 ping statistics --- 00:07:25.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.489 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=926397 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 926397 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 926397 ']' 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.489 [2024-07-27 02:06:53.360370] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:07:25.489 [2024-07-27 02:06:53.360445] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.489 [2024-07-27 02:06:53.397858] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:25.489 [2024-07-27 02:06:53.430538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.489 [2024-07-27 02:06:53.521166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.489 [2024-07-27 02:06:53.521233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.489 [2024-07-27 02:06:53.521249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.489 [2024-07-27 02:06:53.521262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.489 [2024-07-27 02:06:53.521274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.489 [2024-07-27 02:06:53.521359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.489 [2024-07-27 02:06:53.521427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.489 [2024-07-27 02:06:53.521430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.489 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.748 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.748 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:25.748 [2024-07-27 02:06:53.881497] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.007 02:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:26.268 02:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:26.268 02:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:26.526 02:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:26.526 02:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:26.785 02:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:27.042 02:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3275cb0b-c448-4cb9-bc79-ead18a8474d7 00:07:27.042 02:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3275cb0b-c448-4cb9-bc79-ead18a8474d7 lvol 20 00:07:27.301 02:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@32 -- # lvol=04037e2c-2c4f-49fc-99ac-ceb650549662 00:07:27.301 02:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.559 02:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04037e2c-2c4f-49fc-99ac-ceb650549662 00:07:27.819 02:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.819 [2024-07-27 02:06:55.964113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.078 02:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.078 02:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=926822 00:07:28.078 02:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:28.078 02:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:28.338 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.318 02:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 04037e2c-2c4f-49fc-99ac-ceb650549662 MY_SNAPSHOT 00:07:29.576 02:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ba0a699f-387a-4f55-8ff0-42cd468108cf 00:07:29.576 02:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 04037e2c-2c4f-49fc-99ac-ceb650549662 30 00:07:29.834 02:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ba0a699f-387a-4f55-8ff0-42cd468108cf MY_CLONE 00:07:30.091 02:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b09cbd50-a4ea-4eb4-a2aa-42bee5052206 00:07:30.091 02:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b09cbd50-a4ea-4eb4-a2aa-42bee5052206 00:07:30.657 02:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 926822 00:07:38.777 Initializing NVMe Controllers 00:07:38.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:38.777 Controller IO queue size 128, less than required. 00:07:38.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:38.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:38.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:38.777 Initialization complete. Launching workers. 
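While spdk_nvme_perf (pid 926822 above: 4 KiB random writes, queue depth 128, 10 seconds, core mask 0x18, hence workers on cores 3 and 4) drives I/O against the namespace, the test mutates the lvol underneath it: snapshot, resize, clone, inflate. A sketch of that sequence, assuming $lvol holds the lvol UUID created earlier:

  snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # the lvol becomes a thin clone of the read-only snapshot
  $rpc_py bdev_lvol_resize "$lvol" 30                      # grow the live lvol from 20 MiB to 30 MiB
  clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)        # writable clone of the snapshot
  $rpc_py bdev_lvol_inflate "$clone"                       # allocate every cluster, decoupling the clone from its snapshot

The latency table that follows reports the per-core results of the 10-second run that completed with this metadata work in flight.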
00:07:38.777 ========================================================
00:07:38.777 Latency(us)
00:07:38.777 Device Information : IOPS MiB/s Average min max
00:07:38.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10707.70 41.83 11962.97 2278.20 82363.48
00:07:38.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10706.70 41.82 11958.16 2123.55 80122.27
00:07:38.777 ========================================================
00:07:38.777 Total : 21414.40 83.65 11960.57 2123.55 82363.48
00:07:38.777
00:07:38.777 02:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:38.777 02:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 04037e2c-2c4f-49fc-99ac-ceb650549662
00:07:39.035 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3275cb0b-c448-4cb9-bc79-ead18a8474d7
00:07:39.292 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:39.292 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:39.293 rmmod nvme_tcp
00:07:39.293 rmmod nvme_fabrics
00:07:39.293 rmmod nvme_keyring
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 926397 ']'
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 926397
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 926397 ']'
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 926397
00:07:39.293 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:07:39.552 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:39.552 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 926397
00:07:39.552 02:07:07
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 926397' 00:07:39.552 killing process with pid 926397 00:07:39.552 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 926397 00:07:39.552 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 926397 00:07:39.811 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.811 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.811 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.811 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.812 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.812 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.812 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.812 02:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.721 00:07:41.721 real 0m18.596s 00:07:41.721 user 1m3.440s 00:07:41.721 sys 0m5.598s 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.721 ************************************ 00:07:41.721 END TEST nvmf_lvol 00:07:41.721 ************************************ 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.721 ************************************ 00:07:41.721 START TEST nvmf_lvs_grow 00:07:41.721 ************************************ 00:07:41.721 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:41.980 * Looking for test storage... 
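TEST nvmf_lvs_grow checks that a logical volume store can be grown at runtime when its backing device gets bigger. Instead of malloc bdevs it uses an AIO bdev backed by a plain file, so the backing size can be changed with truncate. Roughly, with $testdir standing in for test/nvmf/target and the sizes taken from the script's locals (200 MiB initial, 400 MiB final, 150 MiB lvol):

  truncate -s 200M "$testdir/aio_bdev"
  $rpc_py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096         # 4 KiB logical blocks
  lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$testdir/aio_bdev"                              # enlarge the backing file...
  $rpc_py bdev_aio_rescan aio_bdev                                  # ...and have the bdev pick up the new size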
00:07:41.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.981 02:07:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:41.981 02:07:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.981 02:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.887 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:43.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:43.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.888 
02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:43.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:43.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.888 02:07:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:07:43.888 00:07:43.888 --- 10.0.0.2 ping statistics --- 00:07:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.888 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:07:43.888 00:07:43.888 --- 10.0.0.1 ping statistics --- 00:07:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.888 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.888 02:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=930096 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 930096 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 930096 ']' 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.888 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.889 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.889 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.149 [2024-07-27 02:07:12.052568] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
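Because nvmftestinit moved the target-side interface into the cvl_0_0_ns_spdk namespace, nvmfappstart launches nvmf_tgt inside that namespace and the harness then waits for its RPC socket before configuring the TCP transport. A minimal sketch of the pattern, using only the pieces visible in the trace (waitforlisten is the autotest helper that polls /var/tmp/spdk.sock until rpc.py responds):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"
  $rpc_py nvmf_create_transport -t tcp -o -u 8192   # transport options as chosen by nvmf/common.sh for TCP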
00:07:44.149 [2024-07-27 02:07:12.052666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.149 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.149 [2024-07-27 02:07:12.091298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:44.149 [2024-07-27 02:07:12.117878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.149 [2024-07-27 02:07:12.205705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.149 [2024-07-27 02:07:12.205765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.149 [2024-07-27 02:07:12.205786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.149 [2024-07-27 02:07:12.205803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.149 [2024-07-27 02:07:12.205817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.149 [2024-07-27 02:07:12.205873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.408 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.408 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:44.408 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.408 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.408 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.408 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.408 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:44.667 [2024-07-27 02:07:12.587480] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.667 ************************************ 00:07:44.667 START TEST lvs_grow_clean 00:07:44.667 ************************************ 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.667 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:44.927 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:44.927 02:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:45.187 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=77a09adc-17ab-4381-b854-907e15f16f0b 00:07:45.187 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:45.187 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:45.445 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:45.445 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:45.445 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77a09adc-17ab-4381-b854-907e15f16f0b lvol 150 00:07:45.702 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=943dc872-a399-47f3-9cf0-7d24b811f234 00:07:45.702 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.702 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:45.961 [2024-07-27 02:07:13.895287] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:45.961 [2024-07-27 02:07:13.895389] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 
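The cluster accounting the test asserts falls straight out of the sizes above: a 200 MiB backing file carved into 4 MiB clusters (--cluster-sz 4194304) gives 50 clusters, one of which the blobstore keeps for metadata, so total_data_clusters reads 49; the 150 MiB lvol rounds up to 38 clusters. Doubling the file and rescanning only grows the bdev (51200 -> 102400 4 KiB blocks, per the notice above); the lvstore keeps reporting 49 until bdev_lvol_grow_lvstore runs later. A sketch of the check, paths shortened:

# 200 MiB / 4 MiB = 50 clusters, minus metadata -> 49 data clusters
# ceil(150 MiB / 4 MiB) = 38 clusters for the lvol
# after the grow on the 400 MiB file: 100 - 1 -> 99 data clusters
lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'        # 49 before the grow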
00:07:45.961 true 00:07:45.961 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:45.961 02:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:46.220 02:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:46.220 02:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.480 02:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 943dc872-a399-47f3-9cf0-7d24b811f234 00:07:46.739 02:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:46.739 [2024-07-27 02:07:14.890368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.997 02:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=930535 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 930535 /var/tmp/bdevperf.sock 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 930535 ']' 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:46.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.997 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.257 [2024-07-27 02:07:15.194166] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
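With the lvol in place, the rest of the clean run is: export it over NVMe/TCP, attach from a second SPDK process, and write to it while the lvstore is grown underneath. The export plus the bdevperf launch traced here reduce to the sketch below; -z keeps bdevperf idle until perform_tests arrives on its private RPC socket, and -S 1 is what produces the per-second result rows further down:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
        943dc872-a399-47f3-9cf0-7d24b811f234      # the lvol's UUID
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 4 KiB random writes, queue depth 128, 10 s, one stats row per second
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &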
00:07:47.257 [2024-07-27 02:07:15.194241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930535 ] 00:07:47.257 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.257 [2024-07-27 02:07:15.225288] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.257 [2024-07-27 02:07:15.256382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.257 [2024-07-27 02:07:15.347031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.515 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.515 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:47.516 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:47.773 Nvme0n1 00:07:47.773 02:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:48.031 [ 00:07:48.031 { 00:07:48.031 "name": "Nvme0n1", 00:07:48.031 "aliases": [ 00:07:48.031 "943dc872-a399-47f3-9cf0-7d24b811f234" 00:07:48.031 ], 00:07:48.031 "product_name": "NVMe disk", 00:07:48.031 "block_size": 4096, 00:07:48.031 "num_blocks": 38912, 00:07:48.031 "uuid": "943dc872-a399-47f3-9cf0-7d24b811f234", 00:07:48.031 "assigned_rate_limits": { 00:07:48.031 "rw_ios_per_sec": 0, 00:07:48.032 "rw_mbytes_per_sec": 0, 00:07:48.032 "r_mbytes_per_sec": 0, 00:07:48.032 "w_mbytes_per_sec": 0 00:07:48.032 }, 00:07:48.032 "claimed": false, 00:07:48.032 "zoned": false, 00:07:48.032 "supported_io_types": { 00:07:48.032 "read": true, 00:07:48.032 "write": true, 00:07:48.032 "unmap": true, 00:07:48.032 "flush": true, 00:07:48.032 "reset": true, 00:07:48.032 "nvme_admin": true, 00:07:48.032 "nvme_io": true, 00:07:48.032 "nvme_io_md": false, 00:07:48.032 "write_zeroes": true, 00:07:48.032 "zcopy": false, 00:07:48.032 "get_zone_info": false, 00:07:48.032 "zone_management": false, 00:07:48.032 "zone_append": false, 00:07:48.032 "compare": true, 00:07:48.032 "compare_and_write": true, 00:07:48.032 "abort": true, 00:07:48.032 "seek_hole": false, 00:07:48.032 "seek_data": false, 00:07:48.032 "copy": true, 00:07:48.032 "nvme_iov_md": false 00:07:48.032 }, 00:07:48.032 "memory_domains": [ 00:07:48.032 { 00:07:48.032 "dma_device_id": "system", 00:07:48.032 "dma_device_type": 1 00:07:48.032 } 00:07:48.032 ], 00:07:48.032 "driver_specific": { 00:07:48.032 "nvme": [ 00:07:48.032 { 00:07:48.032 "trid": { 00:07:48.032 "trtype": "TCP", 00:07:48.032 "adrfam": "IPv4", 00:07:48.032 "traddr": "10.0.0.2", 00:07:48.032 "trsvcid": "4420", 00:07:48.032 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:48.032 }, 00:07:48.032 "ctrlr_data": { 00:07:48.032 "cntlid": 1, 00:07:48.032 "vendor_id": "0x8086", 00:07:48.032 "model_number": "SPDK bdev Controller", 00:07:48.032 "serial_number": "SPDK0", 00:07:48.032 "firmware_revision": "24.09", 00:07:48.032 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:07:48.032 "oacs": { 00:07:48.032 "security": 0, 00:07:48.032 "format": 0, 00:07:48.032 "firmware": 0, 00:07:48.032 "ns_manage": 0 00:07:48.032 }, 00:07:48.032 "multi_ctrlr": true, 00:07:48.032 "ana_reporting": false 00:07:48.032 }, 00:07:48.032 "vs": { 00:07:48.032 "nvme_version": "1.3" 00:07:48.032 }, 00:07:48.032 "ns_data": { 00:07:48.032 "id": 1, 00:07:48.032 "can_share": true 00:07:48.032 } 00:07:48.032 } 00:07:48.032 ], 00:07:48.032 "mp_policy": "active_passive" 00:07:48.032 } 00:07:48.032 } 00:07:48.032 ] 00:07:48.032 02:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=930670 00:07:48.032 02:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:48.032 02:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.291 Running I/O for 10 seconds... 00:07:49.265 Latency(us) 00:07:49.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.265 Nvme0n1 : 1.00 14088.00 55.03 0.00 0.00 0.00 0.00 0.00 00:07:49.265 =================================================================================================================== 00:07:49.265 Total : 14088.00 55.03 0.00 0.00 0.00 0.00 0.00 00:07:49.265 00:07:50.205 02:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:50.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.205 Nvme0n1 : 2.00 14307.50 55.89 0.00 0.00 0.00 0.00 0.00 00:07:50.205 =================================================================================================================== 00:07:50.205 Total : 14307.50 55.89 0.00 0.00 0.00 0.00 0.00 00:07:50.205 00:07:50.463 true 00:07:50.463 02:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:50.463 02:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:50.723 02:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:50.723 02:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:50.723 02:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 930670 00:07:51.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.292 Nvme0n1 : 3.00 14381.33 56.18 0.00 0.00 0.00 0.00 0.00 00:07:51.292 =================================================================================================================== 00:07:51.292 Total : 14381.33 56.18 0.00 0.00 0.00 0.00 0.00 00:07:51.292 00:07:52.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.232 Nvme0n1 : 4.00 14434.00 56.38 0.00 0.00 0.00 0.00 0.00 00:07:52.232 
=================================================================================================================== 00:07:52.232 Total : 14434.00 56.38 0.00 0.00 0.00 0.00 0.00 00:07:52.232 00:07:53.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.167 Nvme0n1 : 5.00 14478.20 56.56 0.00 0.00 0.00 0.00 0.00 00:07:53.167 =================================================================================================================== 00:07:53.167 Total : 14478.20 56.56 0.00 0.00 0.00 0.00 0.00 00:07:53.167 00:07:54.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.541 Nvme0n1 : 6.00 14518.67 56.71 0.00 0.00 0.00 0.00 0.00 00:07:54.541 =================================================================================================================== 00:07:54.541 Total : 14518.67 56.71 0.00 0.00 0.00 0.00 0.00 00:07:54.541 00:07:55.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.475 Nvme0n1 : 7.00 14556.29 56.86 0.00 0.00 0.00 0.00 0.00 00:07:55.475 =================================================================================================================== 00:07:55.475 Total : 14556.29 56.86 0.00 0.00 0.00 0.00 0.00 00:07:55.475 00:07:56.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.411 Nvme0n1 : 8.00 14584.88 56.97 0.00 0.00 0.00 0.00 0.00 00:07:56.411 =================================================================================================================== 00:07:56.411 Total : 14584.88 56.97 0.00 0.00 0.00 0.00 0.00 00:07:56.411 00:07:57.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.351 Nvme0n1 : 9.00 14606.89 57.06 0.00 0.00 0.00 0.00 0.00 00:07:57.351 =================================================================================================================== 00:07:57.351 Total : 14606.89 57.06 0.00 0.00 0.00 0.00 0.00 00:07:57.351 00:07:58.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.288 Nvme0n1 : 10.00 14631.00 57.15 0.00 0.00 0.00 0.00 0.00 00:07:58.288 =================================================================================================================== 00:07:58.288 Total : 14631.00 57.15 0.00 0.00 0.00 0.00 0.00 00:07:58.288 00:07:58.288 00:07:58.288 Latency(us) 00:07:58.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.288 Nvme0n1 : 10.01 14634.02 57.16 0.00 0.00 8740.61 5242.88 20000.62 00:07:58.288 =================================================================================================================== 00:07:58.288 Total : 14634.02 57.16 0.00 0.00 8740.61 5242.88 20000.62 00:07:58.288 0 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 930535 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 930535 ']' 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 930535 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.288 02:07:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930535 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930535' 00:07:58.288 killing process with pid 930535 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 930535 00:07:58.288 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.288 00:07:58.288 Latency(us) 00:07:58.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.288 =================================================================================================================== 00:07:58.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.288 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 930535 00:07:58.546 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.803 02:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.062 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:59.062 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:59.321 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:59.321 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:59.321 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.582 [2024-07-27 02:07:27.572827] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.582 02:07:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.582 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:07:59.842 request: 00:07:59.842 { 00:07:59.842 "uuid": "77a09adc-17ab-4381-b854-907e15f16f0b", 00:07:59.842 "method": "bdev_lvol_get_lvstores", 00:07:59.842 "req_id": 1 00:07:59.842 } 00:07:59.842 Got JSON-RPC error response 00:07:59.842 response: 00:07:59.842 { 00:07:59.842 "code": -19, 00:07:59.842 "message": "No such device" 00:07:59.842 } 00:07:59.842 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:59.842 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.842 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.842 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.842 02:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:00.102 aio_bdev 00:08:00.102 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 943dc872-a399-47f3-9cf0-7d24b811f234 00:08:00.102 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=943dc872-a399-47f3-9cf0-7d24b811f234 00:08:00.102 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.102 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:00.102 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.102 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.102 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:00.361 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 943dc872-a399-47f3-9cf0-7d24b811f234 -t 2000 00:08:00.619 [ 00:08:00.619 { 00:08:00.619 "name": "943dc872-a399-47f3-9cf0-7d24b811f234", 00:08:00.619 "aliases": [ 00:08:00.619 "lvs/lvol" 00:08:00.619 ], 00:08:00.619 "product_name": "Logical Volume", 00:08:00.619 "block_size": 4096, 00:08:00.619 "num_blocks": 38912, 00:08:00.619 "uuid": "943dc872-a399-47f3-9cf0-7d24b811f234", 00:08:00.619 "assigned_rate_limits": { 00:08:00.619 "rw_ios_per_sec": 0, 00:08:00.619 "rw_mbytes_per_sec": 0, 00:08:00.619 "r_mbytes_per_sec": 0, 00:08:00.619 "w_mbytes_per_sec": 0 00:08:00.619 }, 00:08:00.619 "claimed": false, 00:08:00.619 "zoned": false, 00:08:00.619 "supported_io_types": { 00:08:00.619 "read": true, 00:08:00.619 "write": true, 00:08:00.619 "unmap": true, 00:08:00.619 "flush": false, 00:08:00.619 "reset": true, 00:08:00.619 "nvme_admin": false, 00:08:00.619 "nvme_io": false, 00:08:00.619 "nvme_io_md": false, 00:08:00.619 "write_zeroes": true, 00:08:00.619 "zcopy": false, 00:08:00.619 "get_zone_info": false, 00:08:00.619 "zone_management": false, 00:08:00.619 "zone_append": false, 00:08:00.619 "compare": false, 00:08:00.619 "compare_and_write": false, 00:08:00.619 "abort": false, 00:08:00.619 "seek_hole": true, 00:08:00.619 "seek_data": true, 00:08:00.619 "copy": false, 00:08:00.619 "nvme_iov_md": false 00:08:00.619 }, 00:08:00.619 "driver_specific": { 00:08:00.619 "lvol": { 00:08:00.619 "lvol_store_uuid": "77a09adc-17ab-4381-b854-907e15f16f0b", 00:08:00.619 "base_bdev": "aio_bdev", 00:08:00.619 "thin_provision": false, 00:08:00.619 "num_allocated_clusters": 38, 00:08:00.619 "snapshot": false, 00:08:00.619 "clone": false, 00:08:00.619 "esnap_clone": false 00:08:00.619 } 00:08:00.619 } 00:08:00.619 } 00:08:00.619 ] 00:08:00.619 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:00.619 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:08:00.619 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:00.879 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:00.879 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:08:00.879 02:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:01.138 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:01.138 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 943dc872-a399-47f3-9cf0-7d24b811f234 00:08:01.396 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77a09adc-17ab-4381-b854-907e15f16f0b 00:08:01.655 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.915 00:08:01.915 real 0m17.246s 00:08:01.915 user 0m16.420s 00:08:01.915 sys 0m1.977s 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:01.915 ************************************ 00:08:01.915 END TEST lvs_grow_clean 00:08:01.915 ************************************ 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.915 ************************************ 00:08:01.915 START TEST lvs_grow_dirty 00:08:01.915 ************************************ 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.915 02:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.175 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:02.175 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:02.434 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:02.434 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:02.434 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:02.693 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:02.693 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:02.693 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 lvol 150 00:08:02.952 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=67090c4f-235e-40dc-bda2-89cfd6530a59 00:08:02.952 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.952 02:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:03.212 [2024-07-27 02:07:31.191266] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:03.212 [2024-07-27 02:07:31.191354] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:03.212 true 00:08:03.212 02:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:03.212 02:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:03.472 02:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:03.472 02:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.732 02:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67090c4f-235e-40dc-bda2-89cfd6530a59 00:08:03.991 02:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.258 [2024-07-27 02:07:32.182352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:04.258 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=932613 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 932613 /var/tmp/bdevperf.sock 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 932613 ']' 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:04.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.541 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.541 [2024-07-27 02:07:32.483837] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:08:04.541 [2024-07-27 02:07:32.483910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932613 ] 00:08:04.541 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.541 [2024-07-27 02:07:32.515957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
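From this point the trace belongs to lvs_grow_dirty, the second invocation of the same test body; the only functional difference from lvs_grow_clean is the extra argument, which selects the kill -9 teardown branch near the end (annotated further below). In the script's own terms:

run_test lvs_grow_clean lvs_grow          # first pass: clean unload of the lvs
run_test lvs_grow_dirty lvs_grow dirty    # this pass: SIGKILL with the lvs open,
                                          # then reload and recover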
00:08:04.541 [2024-07-27 02:07:32.545605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.541 [2024-07-27 02:07:32.635605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.800 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.800 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:04.800 02:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:05.057 Nvme0n1 00:08:05.057 02:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:05.315 [ 00:08:05.315 { 00:08:05.315 "name": "Nvme0n1", 00:08:05.315 "aliases": [ 00:08:05.315 "67090c4f-235e-40dc-bda2-89cfd6530a59" 00:08:05.315 ], 00:08:05.315 "product_name": "NVMe disk", 00:08:05.315 "block_size": 4096, 00:08:05.315 "num_blocks": 38912, 00:08:05.315 "uuid": "67090c4f-235e-40dc-bda2-89cfd6530a59", 00:08:05.315 "assigned_rate_limits": { 00:08:05.315 "rw_ios_per_sec": 0, 00:08:05.315 "rw_mbytes_per_sec": 0, 00:08:05.315 "r_mbytes_per_sec": 0, 00:08:05.315 "w_mbytes_per_sec": 0 00:08:05.315 }, 00:08:05.315 "claimed": false, 00:08:05.315 "zoned": false, 00:08:05.315 "supported_io_types": { 00:08:05.315 "read": true, 00:08:05.315 "write": true, 00:08:05.315 "unmap": true, 00:08:05.315 "flush": true, 00:08:05.315 "reset": true, 00:08:05.315 "nvme_admin": true, 00:08:05.315 "nvme_io": true, 00:08:05.315 "nvme_io_md": false, 00:08:05.315 "write_zeroes": true, 00:08:05.315 "zcopy": false, 00:08:05.315 "get_zone_info": false, 00:08:05.315 "zone_management": false, 00:08:05.315 "zone_append": false, 00:08:05.315 "compare": true, 00:08:05.315 "compare_and_write": true, 00:08:05.315 "abort": true, 00:08:05.315 "seek_hole": false, 00:08:05.315 "seek_data": false, 00:08:05.315 "copy": true, 00:08:05.315 "nvme_iov_md": false 00:08:05.315 }, 00:08:05.315 "memory_domains": [ 00:08:05.315 { 00:08:05.315 "dma_device_id": "system", 00:08:05.315 "dma_device_type": 1 00:08:05.315 } 00:08:05.315 ], 00:08:05.315 "driver_specific": { 00:08:05.315 "nvme": [ 00:08:05.315 { 00:08:05.315 "trid": { 00:08:05.315 "trtype": "TCP", 00:08:05.315 "adrfam": "IPv4", 00:08:05.315 "traddr": "10.0.0.2", 00:08:05.315 "trsvcid": "4420", 00:08:05.315 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:05.315 }, 00:08:05.315 "ctrlr_data": { 00:08:05.315 "cntlid": 1, 00:08:05.315 "vendor_id": "0x8086", 00:08:05.315 "model_number": "SPDK bdev Controller", 00:08:05.315 "serial_number": "SPDK0", 00:08:05.315 "firmware_revision": "24.09", 00:08:05.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:05.315 "oacs": { 00:08:05.315 "security": 0, 00:08:05.316 "format": 0, 00:08:05.316 "firmware": 0, 00:08:05.316 "ns_manage": 0 00:08:05.316 }, 00:08:05.316 "multi_ctrlr": true, 00:08:05.316 "ana_reporting": false 00:08:05.316 }, 00:08:05.316 "vs": { 00:08:05.316 "nvme_version": "1.3" 00:08:05.316 }, 00:08:05.316 "ns_data": { 00:08:05.316 "id": 1, 00:08:05.316 "can_share": true 00:08:05.316 } 00:08:05.316 } 00:08:05.316 ], 00:08:05.316 "mp_policy": "active_passive" 00:08:05.316 } 00:08:05.316 } 00:08:05.316 ] 00:08:05.316 
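The bdev JSON above confirms the attached namespace is the lvol, byte for byte: num_blocks 38912 is 38 clusters x 4 MiB / 4 KiB block size. The timing of the grow is the point of the test; it lands two seconds into the 10-second write run, so the lvstore metadata update happens under live I/O. Condensed from the trace that follows:

./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 2                                   # let the random writes get going first
./scripts/rpc.py bdev_lvol_grow_lvstore -u 67c0cd63-5243-436d-a5d6-637dc0f7a847
# total_data_clusters must now read 99 while the writes keep completing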
02:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=932750 00:08:05.316 02:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:05.316 02:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:05.574 Running I/O for 10 seconds... 00:08:06.510 Latency(us) 00:08:06.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.510 Nvme0n1 : 1.00 13611.00 53.17 0.00 0.00 0.00 0.00 0.00 00:08:06.510 =================================================================================================================== 00:08:06.510 Total : 13611.00 53.17 0.00 0.00 0.00 0.00 0.00 00:08:06.510 00:08:07.445 02:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:07.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.446 Nvme0n1 : 2.00 13673.50 53.41 0.00 0.00 0.00 0.00 0.00 00:08:07.446 =================================================================================================================== 00:08:07.446 Total : 13673.50 53.41 0.00 0.00 0.00 0.00 0.00 00:08:07.446 00:08:07.703 true 00:08:07.703 02:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:07.703 02:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:07.962 02:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:07.962 02:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:07.962 02:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 932750 00:08:08.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.530 Nvme0n1 : 3.00 13771.67 53.80 0.00 0.00 0.00 0.00 0.00 00:08:08.530 =================================================================================================================== 00:08:08.530 Total : 13771.67 53.80 0.00 0.00 0.00 0.00 0.00 00:08:08.530 00:08:09.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.471 Nvme0n1 : 4.00 13844.75 54.08 0.00 0.00 0.00 0.00 0.00 00:08:09.471 =================================================================================================================== 00:08:09.471 Total : 13844.75 54.08 0.00 0.00 0.00 0.00 0.00 00:08:09.471 00:08:10.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.847 Nvme0n1 : 5.00 13890.20 54.26 0.00 0.00 0.00 0.00 0.00 00:08:10.847 =================================================================================================================== 00:08:10.847 Total : 13890.20 54.26 0.00 0.00 0.00 0.00 0.00 00:08:10.847 00:08:11.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:11.416 Nvme0n1 : 6.00 13936.50 54.44 0.00 0.00 0.00 0.00 0.00 00:08:11.416 =================================================================================================================== 00:08:11.416 Total : 13936.50 54.44 0.00 0.00 0.00 0.00 0.00 00:08:11.416 00:08:12.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.797 Nvme0n1 : 7.00 13963.86 54.55 0.00 0.00 0.00 0.00 0.00 00:08:12.798 =================================================================================================================== 00:08:12.798 Total : 13963.86 54.55 0.00 0.00 0.00 0.00 0.00 00:08:12.798 00:08:13.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.735 Nvme0n1 : 8.00 13986.38 54.63 0.00 0.00 0.00 0.00 0.00 00:08:13.735 =================================================================================================================== 00:08:13.735 Total : 13986.38 54.63 0.00 0.00 0.00 0.00 0.00 00:08:13.735 00:08:14.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.672 Nvme0n1 : 9.00 14012.78 54.74 0.00 0.00 0.00 0.00 0.00 00:08:14.672 =================================================================================================================== 00:08:14.672 Total : 14012.78 54.74 0.00 0.00 0.00 0.00 0.00 00:08:14.672 00:08:15.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.605 Nvme0n1 : 10.00 14023.50 54.78 0.00 0.00 0.00 0.00 0.00 00:08:15.605 =================================================================================================================== 00:08:15.605 Total : 14023.50 54.78 0.00 0.00 0.00 0.00 0.00 00:08:15.605 00:08:15.605 00:08:15.605 Latency(us) 00:08:15.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.605 Nvme0n1 : 10.01 14023.31 54.78 0.00 0.00 9119.82 6262.33 13786.83 00:08:15.605 =================================================================================================================== 00:08:15.605 Total : 14023.31 54.78 0.00 0.00 9119.82 6262.33 13786.83 00:08:15.605 0 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 932613 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 932613 ']' 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 932613 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 932613 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 932613' 00:08:15.605 killing process with pid 932613 00:08:15.605 02:07:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 932613 00:08:15.605 Received shutdown signal, test time was about 10.000000 seconds 00:08:15.605 00:08:15.605 Latency(us) 00:08:15.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.605 =================================================================================================================== 00:08:15.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:15.605 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 932613 00:08:15.862 02:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.119 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.377 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:16.377 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 930096 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 930096 00:08:16.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 930096 Killed "${NVMF_APP[@]}" "$@" 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=934081 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 934081 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 934081 ']' 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.636 02:07:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.636 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.636 [2024-07-27 02:07:44.715771] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:08:16.636 [2024-07-27 02:07:44.715856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.636 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.636 [2024-07-27 02:07:44.755179] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.636 [2024-07-27 02:07:44.781416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.894 [2024-07-27 02:07:44.867689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.894 [2024-07-27 02:07:44.867746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.894 [2024-07-27 02:07:44.867775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.894 [2024-07-27 02:07:44.867792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.894 [2024-07-27 02:07:44.867805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
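This is the dirty-specific stretch: free_clusters came back as 61 (99 data clusters minus the lvol's 38 allocated), then the target holding the open lvstore was killed with SIGKILL rather than unloaded, and a fresh nvmf_tgt was started in the same namespace. Because the blobstore was never cleanly unloaded, re-creating the AIO bdev forces a recovery pass on load (the "Performing recovery on blobstore" notices just below) before the lvol reappears with its cluster counts intact. The sequence, condensed:

kill -9 "$nvmfpid"                        # target dies with the lvstore dirty
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
./scripts/rpc.py bdev_wait_for_examine    # examine reloads lvs, runs recovery
./scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 \
        | jq -r '.[0].free_clusters'      # still 61 after recovery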
00:08:16.894 [2024-07-27 02:07:44.867837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.894 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.894 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:16.894 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.894 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.894 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.894 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.894 02:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.153 [2024-07-27 02:07:45.223501] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:17.153 [2024-07-27 02:07:45.223658] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:17.153 [2024-07-27 02:07:45.223732] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 67090c4f-235e-40dc-bda2-89cfd6530a59 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=67090c4f-235e-40dc-bda2-89cfd6530a59 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.153 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.411 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67090c4f-235e-40dc-bda2-89cfd6530a59 -t 2000 00:08:17.670 [ 00:08:17.670 { 00:08:17.670 "name": "67090c4f-235e-40dc-bda2-89cfd6530a59", 00:08:17.670 "aliases": [ 00:08:17.670 "lvs/lvol" 00:08:17.670 ], 00:08:17.670 "product_name": "Logical Volume", 00:08:17.670 "block_size": 4096, 00:08:17.670 "num_blocks": 38912, 00:08:17.670 "uuid": "67090c4f-235e-40dc-bda2-89cfd6530a59", 00:08:17.670 "assigned_rate_limits": { 00:08:17.670 "rw_ios_per_sec": 0, 00:08:17.670 "rw_mbytes_per_sec": 0, 00:08:17.670 "r_mbytes_per_sec": 0, 00:08:17.670 "w_mbytes_per_sec": 0 00:08:17.670 }, 00:08:17.670 "claimed": false, 00:08:17.670 "zoned": false, 
00:08:17.670 "supported_io_types": { 00:08:17.670 "read": true, 00:08:17.670 "write": true, 00:08:17.670 "unmap": true, 00:08:17.670 "flush": false, 00:08:17.670 "reset": true, 00:08:17.670 "nvme_admin": false, 00:08:17.670 "nvme_io": false, 00:08:17.670 "nvme_io_md": false, 00:08:17.670 "write_zeroes": true, 00:08:17.670 "zcopy": false, 00:08:17.670 "get_zone_info": false, 00:08:17.670 "zone_management": false, 00:08:17.670 "zone_append": false, 00:08:17.670 "compare": false, 00:08:17.670 "compare_and_write": false, 00:08:17.670 "abort": false, 00:08:17.670 "seek_hole": true, 00:08:17.670 "seek_data": true, 00:08:17.670 "copy": false, 00:08:17.670 "nvme_iov_md": false 00:08:17.670 }, 00:08:17.670 "driver_specific": { 00:08:17.670 "lvol": { 00:08:17.670 "lvol_store_uuid": "67c0cd63-5243-436d-a5d6-637dc0f7a847", 00:08:17.670 "base_bdev": "aio_bdev", 00:08:17.670 "thin_provision": false, 00:08:17.670 "num_allocated_clusters": 38, 00:08:17.670 "snapshot": false, 00:08:17.670 "clone": false, 00:08:17.670 "esnap_clone": false 00:08:17.670 } 00:08:17.670 } 00:08:17.670 } 00:08:17.670 ] 00:08:17.670 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:17.670 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:17.670 02:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:17.928 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:17.928 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:17.928 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:18.186 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:18.186 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.444 [2024-07-27 02:07:46.508624] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:18.444 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:18.704 request: 00:08:18.704 { 00:08:18.704 "uuid": "67c0cd63-5243-436d-a5d6-637dc0f7a847", 00:08:18.704 "method": "bdev_lvol_get_lvstores", 00:08:18.704 "req_id": 1 00:08:18.704 } 00:08:18.704 Got JSON-RPC error response 00:08:18.704 response: 00:08:18.704 { 00:08:18.704 "code": -19, 00:08:18.704 "message": "No such device" 00:08:18.704 } 00:08:18.704 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:18.704 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.704 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.704 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.704 02:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.962 aio_bdev 00:08:18.962 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67090c4f-235e-40dc-bda2-89cfd6530a59 00:08:18.962 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=67090c4f-235e-40dc-bda2-89cfd6530a59 00:08:18.962 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.962 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:18.962 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.962 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.962 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:19.220 02:07:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67090c4f-235e-40dc-bda2-89cfd6530a59 -t 2000 00:08:19.478 [ 00:08:19.479 { 00:08:19.479 "name": "67090c4f-235e-40dc-bda2-89cfd6530a59", 00:08:19.479 "aliases": [ 00:08:19.479 "lvs/lvol" 00:08:19.479 ], 00:08:19.479 "product_name": "Logical Volume", 00:08:19.479 "block_size": 4096, 00:08:19.479 "num_blocks": 38912, 00:08:19.479 "uuid": "67090c4f-235e-40dc-bda2-89cfd6530a59", 00:08:19.479 "assigned_rate_limits": { 00:08:19.479 "rw_ios_per_sec": 0, 00:08:19.479 "rw_mbytes_per_sec": 0, 00:08:19.479 "r_mbytes_per_sec": 0, 00:08:19.479 "w_mbytes_per_sec": 0 00:08:19.479 }, 00:08:19.479 "claimed": false, 00:08:19.479 "zoned": false, 00:08:19.479 "supported_io_types": { 00:08:19.479 "read": true, 00:08:19.479 "write": true, 00:08:19.479 "unmap": true, 00:08:19.479 "flush": false, 00:08:19.479 "reset": true, 00:08:19.479 "nvme_admin": false, 00:08:19.479 "nvme_io": false, 00:08:19.479 "nvme_io_md": false, 00:08:19.479 "write_zeroes": true, 00:08:19.479 "zcopy": false, 00:08:19.479 "get_zone_info": false, 00:08:19.479 "zone_management": false, 00:08:19.479 "zone_append": false, 00:08:19.479 "compare": false, 00:08:19.479 "compare_and_write": false, 00:08:19.479 "abort": false, 00:08:19.479 "seek_hole": true, 00:08:19.479 "seek_data": true, 00:08:19.479 "copy": false, 00:08:19.479 "nvme_iov_md": false 00:08:19.479 }, 00:08:19.479 "driver_specific": { 00:08:19.479 "lvol": { 00:08:19.479 "lvol_store_uuid": "67c0cd63-5243-436d-a5d6-637dc0f7a847", 00:08:19.479 "base_bdev": "aio_bdev", 00:08:19.479 "thin_provision": false, 00:08:19.479 "num_allocated_clusters": 38, 00:08:19.479 "snapshot": false, 00:08:19.479 "clone": false, 00:08:19.479 "esnap_clone": false 00:08:19.479 } 00:08:19.479 } 00:08:19.479 } 00:08:19.479 ] 00:08:19.479 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:19.479 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:19.479 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:19.736 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:19.736 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 00:08:19.737 02:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:20.029 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:20.029 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67090c4f-235e-40dc-bda2-89cfd6530a59 00:08:20.289 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67c0cd63-5243-436d-a5d6-637dc0f7a847 
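The two cluster probes above are the actual recovery assertions: after replay the lvstore must still report 99 total data clusters and 61 free, which is exactly consistent with the surviving lvol's 38 allocated clusters (99 - 38 = 61); only once both checks pass are the lvol and the lvstore torn down. The same probe as a hedged one-liner ($lvs_uuid stands in for the UUID in this log):

  rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" \
      | jq -r '.[0] | "total=\(.total_data_clusters) free=\(.free_clusters)"'   # expect total=99 free=61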
00:08:20.549 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.808 00:08:20.808 real 0m18.877s 00:08:20.808 user 0m42.669s 00:08:20.808 sys 0m6.894s 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.808 ************************************ 00:08:20.808 END TEST lvs_grow_dirty 00:08:20.808 ************************************ 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:20.808 nvmf_trace.0 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.808 rmmod nvme_tcp 00:08:20.808 rmmod nvme_fabrics 00:08:20.808 rmmod nvme_keyring 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 934081 ']' 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 934081 00:08:20.808 
02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 934081 ']' 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 934081 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 934081 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934081' 00:08:20.808 killing process with pid 934081 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 934081 00:08:20.808 02:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 934081 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.068 02:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.607 00:08:23.607 real 0m41.364s 00:08:23.607 user 1m4.620s 00:08:23.607 sys 0m10.714s 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.607 ************************************ 00:08:23.607 END TEST nvmf_lvs_grow 00:08:23.607 ************************************ 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.607 ************************************ 00:08:23.607 START TEST nvmf_bdev_io_wait 00:08:23.607 ************************************ 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:23.607 * Looking for test storage... 00:08:23.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.607 
02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.607 02:07:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:25.512 02:07:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.512 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.513 02:07:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:08:25.513 00:08:25.513 --- 10.0.0.2 ping statistics --- 00:08:25.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.513 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:08:25.513 00:08:25.513 --- 10.0.0.1 ping statistics --- 00:08:25.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.513 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=936606 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 936606 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 936606 ']' 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.513 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.513 [2024-07-27 02:07:53.462915] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
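All of the interface plumbing above reduces to a split-namespace TCP fabric: the target port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and the two pings confirm reachability in both directions before nvmf_tgt is started inside the namespace. Condensed from the traced commands (a sketch, not a drop-in script):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator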
00:08:25.513 [2024-07-27 02:07:53.463014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.513 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.513 [2024-07-27 02:07:53.502291] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:25.513 [2024-07-27 02:07:53.528475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.513 [2024-07-27 02:07:53.619105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.513 [2024-07-27 02:07:53.619179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.513 [2024-07-27 02:07:53.619192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.513 [2024-07-27 02:07:53.619219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.513 [2024-07-27 02:07:53.619229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.513 [2024-07-27 02:07:53.619281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.513 [2024-07-27 02:07:53.619324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.513 [2024-07-27 02:07:53.619419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.513 [2024-07-27 02:07:53.619421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 [2024-07-27 02:07:53.784118] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 Malloc0 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.771 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 [2024-07-27 02:07:53.845186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=936633 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=936635 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:25.772 02:07:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:25.772 { 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme$subsystem", 00:08:25.772 "trtype": "$TEST_TRANSPORT", 00:08:25.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "$NVMF_PORT", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.772 "hdgst": ${hdgst:-false}, 00:08:25.772 "ddgst": ${ddgst:-false} 00:08:25.772 }, 00:08:25.772 "method": "bdev_nvme_attach_controller" 00:08:25.772 } 00:08:25.772 EOF 00:08:25.772 )") 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=936637 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:25.772 { 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme$subsystem", 00:08:25.772 "trtype": "$TEST_TRANSPORT", 00:08:25.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "$NVMF_PORT", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.772 "hdgst": ${hdgst:-false}, 00:08:25.772 "ddgst": ${ddgst:-false} 00:08:25.772 }, 00:08:25.772 "method": "bdev_nvme_attach_controller" 00:08:25.772 } 00:08:25.772 EOF 00:08:25.772 )") 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=936640 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:25.772 { 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme$subsystem", 00:08:25.772 "trtype": "$TEST_TRANSPORT", 00:08:25.772 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "$NVMF_PORT", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.772 "hdgst": ${hdgst:-false}, 00:08:25.772 "ddgst": ${ddgst:-false} 00:08:25.772 }, 00:08:25.772 "method": "bdev_nvme_attach_controller" 00:08:25.772 } 00:08:25.772 EOF 00:08:25.772 )") 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:25.772 { 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme$subsystem", 00:08:25.772 "trtype": "$TEST_TRANSPORT", 00:08:25.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "$NVMF_PORT", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.772 "hdgst": ${hdgst:-false}, 00:08:25.772 "ddgst": ${ddgst:-false} 00:08:25.772 }, 00:08:25.772 "method": "bdev_nvme_attach_controller" 00:08:25.772 } 00:08:25.772 EOF 00:08:25.772 )") 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 936633 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme1", 00:08:25.772 "trtype": "tcp", 00:08:25.772 "traddr": "10.0.0.2", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "4420", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:25.772 "hdgst": false, 00:08:25.772 "ddgst": false 00:08:25.772 }, 00:08:25.772 "method": "bdev_nvme_attach_controller" 00:08:25.772 }' 00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme1", 00:08:25.772 "trtype": "tcp", 00:08:25.772 "traddr": "10.0.0.2", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "4420", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:25.772 "hdgst": false, 00:08:25.772 "ddgst": false 00:08:25.772 }, 00:08:25.772 "method": "bdev_nvme_attach_controller" 00:08:25.772 }'
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme1", 00:08:25.772 "trtype": "tcp", 00:08:25.772 "traddr": "10.0.0.2", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "4420", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:25.772 "hdgst": false, 00:08:25.772 "ddgst": false 00:08:25.772 }, 00:08:25.772 "method": "bdev_nvme_attach_controller" 00:08:25.772 }'
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq .
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:08:25.772 02:07:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:25.772 "params": { 00:08:25.772 "name": "Nvme1", 00:08:25.772 "trtype": "tcp", 00:08:25.772 "traddr": "10.0.0.2", 00:08:25.772 "adrfam": "ipv4", 00:08:25.772 "trsvcid": "4420", 00:08:25.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:25.773 "hdgst": false, 00:08:25.773 "ddgst": false 00:08:25.773 }, 00:08:25.773 "method": "bdev_nvme_attach_controller" 00:08:25.773 }'
00:08:25.773 [2024-07-27 02:07:53.891362] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:08:25.773 [2024-07-27 02:07:53.891362] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:08:25.773 [2024-07-27 02:07:53.891362] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:08:25.773 [2024-07-27 02:07:53.891449] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:08:25.773 [2024-07-27 02:07:53.891450] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:08:25.773 [2024-07-27 02:07:53.891450] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:08:25.773 [2024-07-27 02:07:53.891494] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:08:25.773 [2024-07-27 02:07:53.891552] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:26.030 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.030 [2024-07-27 02:07:54.030834] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.030 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.030 [2024-07-27 02:07:54.061465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.030 [2024-07-27 02:07:54.131005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.030 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.030 [2024-07-27 02:07:54.137384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:26.030 [2024-07-27 02:07:54.160833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.289 [2024-07-27 02:07:54.230731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.289 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.289 [2024-07-27 02:07:54.236206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:26.289 [2024-07-27 02:07:54.260571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.289 [2024-07-27 02:07:54.330646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.289 [2024-07-27 02:07:54.334815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:26.289 [2024-07-27 02:07:54.360980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.289 [2024-07-27 02:07:54.437490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:26.548 Running I/O for 1 seconds... 00:08:26.548 Running I/O for 1 seconds... 00:08:26.548 Running I/O for 1 seconds... 00:08:26.548 Running I/O for 1 seconds... 
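At this point four bdevperf instances are running their one-second workloads (write, read, flush, unmap) concurrently, each pinned to its own core mask and given a distinct -i instance ID so the DPDK file prefixes (spdk1 through spdk4) do not collide. A sketch of the launch pattern in target/bdev_io_wait.sh, with the binary path shortened; the WRITE_PID/READ_PID variable names are assumed by analogy with the FLUSH_PID/UNMAP_PID assignments traced above:

  # Sketch of the concurrent launch (path shortened; gen_nvmf_target_json as sketched earlier;
  # WRITE_PID/READ_PID names are assumptions, FLUSH_PID/UNMAP_PID appear in the trace).
  BDEVPERF=./build/examples/bdevperf
  "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
  READ_PID=$!
  "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  FLUSH_PID=$!
  "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  UNMAP_PID=$!
  # each instance is then reaped in turn, as seen at bdev_io_wait.sh@37-40
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"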
00:08:27.486 00:08:27.486 Latency(us) 00:08:27.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.486 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:27.486 Nvme1n1 : 1.01 10645.42 41.58 0.00 0.00 11976.06 7087.60 20486.07 00:08:27.486 =================================================================================================================== 00:08:27.486 Total : 10645.42 41.58 0.00 0.00 11976.06 7087.60 20486.07 00:08:27.486 00:08:27.486 Latency(us) 00:08:27.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.487 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:27.487 Nvme1n1 : 1.02 5028.40 19.64 0.00 0.00 25180.43 10194.49 28544.57 00:08:27.487 =================================================================================================================== 00:08:27.487 Total : 5028.40 19.64 0.00 0.00 25180.43 10194.49 28544.57 00:08:27.746 00:08:27.746 Latency(us) 00:08:27.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.746 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:27.746 Nvme1n1 : 1.01 5180.90 20.24 0.00 0.00 24619.24 6456.51 52040.44 00:08:27.746 =================================================================================================================== 00:08:27.746 Total : 5180.90 20.24 0.00 0.00 24619.24 6456.51 52040.44 00:08:27.746 00:08:27.746 Latency(us) 00:08:27.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.746 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:27.746 Nvme1n1 : 1.00 185538.79 724.76 0.00 0.00 687.18 295.82 916.29 00:08:27.746 =================================================================================================================== 00:08:27.746 Total : 185538.79 724.76 0.00 0.00 687.18 295.82 916.29 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 936635 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 936637 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 936640 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.007 rmmod nvme_tcp 00:08:28.007 rmmod nvme_fabrics 00:08:28.007 rmmod nvme_keyring 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 936606 ']' 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 936606 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 936606 ']' 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 936606 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 936606 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 936606' 00:08:28.007 killing process with pid 936606 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 936606 00:08:28.007 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 936606 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.266 02:07:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.806 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.806 00:08:30.806 real 0m7.113s 00:08:30.806 user 0m16.669s 00:08:30.806 sys 0m3.373s 00:08:30.806 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.806 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.806 ************************************ 00:08:30.806 END TEST nvmf_bdev_io_wait 
00:08:30.806 ************************************ 00:08:30.806 02:07:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:30.806 02:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.806 02:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.807 ************************************ 00:08:30.807 START TEST nvmf_queue_depth 00:08:30.807 ************************************ 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:30.807 * Looking for test storage... 00:08:30.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.807 02:07:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@296 -- # e810=() 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.713 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:08:32.714 00:08:32.714 --- 10.0.0.2 ping statistics --- 00:08:32.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.714 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:08:32.714 00:08:32.714 --- 10.0.0.1 ping statistics --- 00:08:32.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.714 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=938858 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 938858 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 938858 ']' 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
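The nvmf_tcp_init traces above carve the two e810 ports into a point-to-point test rig: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and every target-side command, including nvmf_tgt itself, runs under ip netns exec. The same sequence, condensed from the @242-@268 and @480 traces (the interface names are specific to this host):

  # Condensed from the nvmf_tcp_init / nvmfappstart traces above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # the target application then lives entirely inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2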
00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.714 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.714 [2024-07-27 02:08:00.676940] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:08:32.714 [2024-07-27 02:08:00.677051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.714 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.714 [2024-07-27 02:08:00.715867] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:32.714 [2024-07-27 02:08:00.741839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.714 [2024-07-27 02:08:00.830145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.714 [2024-07-27 02:08:00.830207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.714 [2024-07-27 02:08:00.830235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.714 [2024-07-27 02:08:00.830246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.714 [2024-07-27 02:08:00.830256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.714 [2024-07-27 02:08:00.830283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 [2024-07-27 02:08:00.980483] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.974 02:08:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 Malloc0 00:08:32.974 02:08:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 [2024-07-27 02:08:01.045372] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=938883 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 938883 /var/tmp/bdevperf.sock 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 938883 ']' 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.974 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.974 [2024-07-27 02:08:01.091570] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:08:32.974 [2024-07-27 02:08:01.091629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938883 ] 00:08:32.974 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.974 [2024-07-27 02:08:01.127909] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.234 [2024-07-27 02:08:01.171431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.234 [2024-07-27 02:08:01.265007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.234 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.234 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:33.234 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:33.234 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.234 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.493 NVMe0n1 00:08:33.493 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.493 02:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.752 Running I/O for 10 seconds... 
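Before this run started, queue_depth.sh@23-27 stood up the target over RPC and @29-35 started bdevperf in -z (wait-for-RPC) mode, attached the remote namespace through bdevperf's own RPC socket, and kicked off a verify workload at queue depth 1024, eight times the 128 used by the earlier bdev_io_wait runs. The same sequence, condensed from the traces (rpc.py stands in for the rpc_cmd wrapper and paths are shortened):

  # Condensed from the queue_depth.sh traces above.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf starts idle (-z) on its own RPC socket, so the controller is
  # attached and the test is triggered from outside:
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests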
00:08:43.750 00:08:43.750 Latency(us) 00:08:43.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.750 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:43.750 Verification LBA range: start 0x0 length 0x4000 00:08:43.750 NVMe0n1 : 10.07 8541.07 33.36 0.00 0.00 119403.75 17767.54 73011.96 00:08:43.750 =================================================================================================================== 00:08:43.750 Total : 8541.07 33.36 0.00 0.00 119403.75 17767.54 73011.96 00:08:43.750 0 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 938883 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 938883 ']' 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 938883 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 938883 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 938883' 00:08:43.750 killing process with pid 938883 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 938883 00:08:43.750 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.750 00:08:43.750 Latency(us) 00:08:43.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.750 =================================================================================================================== 00:08:43.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.750 02:08:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 938883 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.068 rmmod nvme_tcp 00:08:44.068 rmmod nvme_fabrics 00:08:44.068 rmmod nvme_keyring 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:44.068 02:08:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 938858 ']' 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 938858 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 938858 ']' 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 938858 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 938858 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 938858' 00:08:44.068 killing process with pid 938858 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 938858 00:08:44.068 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 938858 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.329 02:08:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:46.868 00:08:46.868 real 0m16.017s 00:08:46.868 user 0m22.544s 00:08:46.868 sys 0m3.028s 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.868 ************************************ 00:08:46.868 END TEST nvmf_queue_depth 00:08:46.868 ************************************ 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.868 
02:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.868 ************************************ 00:08:46.868 START TEST nvmf_target_multipath 00:08:46.868 ************************************ 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:46.868 * Looking for test storage... 00:08:46.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.868 02:08:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.868 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.869 02:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.776 02:08:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.776 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
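The scan above sorts NICs into families purely by PCI vendor:device pairs pulled from the prebuilt pci_bus_cache map: 0x8086:0x159b goes into the e810 array, the 0x15b3 entries into mlx, and pci_devs is then narrowed to the matching family before the per-device loop. A rough from-scratch equivalent with lspci (a sketch only; common.sh reads its cached map rather than shelling out, and the device id below is the one matched in this trace):

  # Locate Intel E810 ports (device id 0x159b) and list their net devices.
  mapfile -t e810 < <(lspci -Dmm -d 8086:159b | awk '{print $1}')
  for pci in "${e810[@]}"; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
  done

On this host that yields the two cvl_0_* interfaces reported a few records below.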
00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.776 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.776 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.776 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.776 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:08:48.777 00:08:48.777 --- 10.0.0.2 ping statistics --- 00:08:48.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.777 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:48.777 00:08:48.777 --- 10.0.0.1 ping statistics --- 00:08:48.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.777 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:48.777 only one NIC for nvmf test 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.777 rmmod nvme_tcp 00:08:48.777 rmmod nvme_fabrics 00:08:48.777 rmmod nvme_keyring 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:48.777 
02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.777 02:08:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:08:50.687 00:08:50.687 real 0m4.264s 00:08:50.687 user 0m0.757s 00:08:50.687 sys 0m1.492s 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:50.687 ************************************ 00:08:50.687 END TEST nvmf_target_multipath 00:08:50.687 ************************************ 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.687 ************************************ 00:08:50.687 START TEST nvmf_zcopy 00:08:50.687 ************************************ 00:08:50.687 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:50.947 * Looking for test storage... 00:08:50.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:50.947 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:50.948 02:08:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
local -ga x722 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.856 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.856 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.856 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.857 02:08:20 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.857 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:08:52.857 00:08:52.857 --- 10.0.0.2 ping statistics --- 00:08:52.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.857 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:52.857 00:08:52.857 --- 10.0.0.1 ping statistics --- 00:08:52.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.857 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=944066 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 944066 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 944066 ']' 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.857 02:08:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.117 [2024-07-27 02:08:21.018426] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:08:53.117 [2024-07-27 02:08:21.018521] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.117 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.117 [2024-07-27 02:08:21.055230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:53.117 [2024-07-27 02:08:21.085159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.117 [2024-07-27 02:08:21.175009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.117 [2024-07-27 02:08:21.175077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.117 [2024-07-27 02:08:21.175104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.117 [2024-07-27 02:08:21.175128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.117 [2024-07-27 02:08:21.175149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.117 [2024-07-27 02:08:21.175188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 [2024-07-27 02:08:21.322907] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 [2024-07-27 02:08:21.339160] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 malloc0 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.378 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:53.379 { 00:08:53.379 "params": { 00:08:53.379 "name": "Nvme$subsystem", 00:08:53.379 "trtype": "$TEST_TRANSPORT", 00:08:53.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.379 "adrfam": "ipv4", 00:08:53.379 "trsvcid": "$NVMF_PORT", 00:08:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.379 "hdgst": ${hdgst:-false}, 00:08:53.379 "ddgst": ${ddgst:-false} 00:08:53.379 }, 00:08:53.379 "method": "bdev_nvme_attach_controller" 00:08:53.379 } 00:08:53.379 EOF 00:08:53.379 )") 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
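From here zcopy.sh drives I/O with bdevperf without ever writing the bdev configuration to disk: the gen_nvmf_target_json helper traced around this point renders a bdev_nvme_attach_controller stanza, and the script hands it to bdevperf as a process substitution, which bash exposes as the /dev/fd/62 path visible in the command line. A condensed sketch of the same pattern (the rootdir assignment is an assumption added for self-containment; the flags are the ones from this run):

  # Feed generated JSON to bdevperf over an anonymous descriptor; bash
  # chooses the /dev/fd/NN name, so the exact number varies run to run.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
      -t 10 -q 128 -w verify -o 8192   # 10 s verify workload, QD 128, 8 KiB I/Os

The second bdevperf invocation below reuses the identical mechanism with -t 5 -w randrw -M 50 while the test pauses the subsystem and re-adds its namespace underneath the workload, which is what the long run of "Requested NSID 1 already in use" messages that follows appears to be exercising.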
00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:53.379 02:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:53.379 "params": { 00:08:53.379 "name": "Nvme1", 00:08:53.379 "trtype": "tcp", 00:08:53.379 "traddr": "10.0.0.2", 00:08:53.379 "adrfam": "ipv4", 00:08:53.379 "trsvcid": "4420", 00:08:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.379 "hdgst": false, 00:08:53.379 "ddgst": false 00:08:53.379 }, 00:08:53.379 "method": "bdev_nvme_attach_controller" 00:08:53.379 }' 00:08:53.379 [2024-07-27 02:08:21.432694] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:08:53.379 [2024-07-27 02:08:21.432763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944087 ] 00:08:53.379 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.379 [2024-07-27 02:08:21.462680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:53.379 [2024-07-27 02:08:21.494265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.638 [2024-07-27 02:08:21.590857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.897 Running I/O for 10 seconds... 00:09:03.897
00:09:03.897 Latency(us)
00:09:03.897 Device Information      : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:03.897 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:03.897 Verification LBA range: start 0x0 length 0x1000
00:09:03.897 Nvme1n1                 :      10.02    4968.44      38.82      0.00     0.00   25698.17    3422.44   39418.69
00:09:03.897 ===================================================================================================================
00:09:03.897 Total                   :               4968.44      38.82      0.00     0.00   25698.17    3422.44   39418.69
00:09:04.156 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=945400 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.156 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:04.156 { 00:09:04.156 "params": { 00:09:04.156 "name": "Nvme$subsystem", 00:09:04.156 "trtype": "$TEST_TRANSPORT", 00:09:04.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.156 "adrfam": "ipv4", 00:09:04.156 "trsvcid": "$NVMF_PORT", 00:09:04.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.156 "hdgst": ${hdgst:-false},
00:09:04.156 "ddgst": ${ddgst:-false} 00:09:04.156 }, 00:09:04.156 "method": "bdev_nvme_attach_controller" 00:09:04.156 } 00:09:04.156 EOF 00:09:04.156 )") 00:09:04.156 [2024-07-27 02:08:32.077728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.156 [2024-07-27 02:08:32.077768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.156 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:04.156 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:04.156 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:04.156 02:08:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:04.156 "params": { 00:09:04.156 "name": "Nvme1", 00:09:04.156 "trtype": "tcp", 00:09:04.156 "traddr": "10.0.0.2", 00:09:04.156 "adrfam": "ipv4", 00:09:04.156 "trsvcid": "4420", 00:09:04.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.156 "hdgst": false, 00:09:04.156 "ddgst": false 00:09:04.156 }, 00:09:04.156 "method": "bdev_nvme_attach_controller" 00:09:04.156 }' 00:09:04.156 [2024-07-27 02:08:32.085695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.156 [2024-07-27 02:08:32.085725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.156 [2024-07-27 02:08:32.093711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.157 [2024-07-27 02:08:32.093738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.157 [2024-07-27 02:08:32.101722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.157 [2024-07-27 02:08:32.101746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.157 [2024-07-27 02:08:32.109741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.157 [2024-07-27 02:08:32.109764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.157 [2024-07-27 02:08:32.117758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.157 [2024-07-27 02:08:32.117780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.157 [2024-07-27 02:08:32.117908] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:09:04.157 [2024-07-27 02:08:32.117975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945400 ]
00:09:04.157 [2024-07-27 02:08:32.125785 - 02:08:32.141849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: same *ERROR* pair x3
00:09:04.157 EAL: No free 2048 kB hugepages reported on node 1
00:09:04.157 [2024-07-27 02:08:32.149846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:04.157 [2024-07-27 02:08:32.149867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.157 [2024-07-27 02:08:32.151367] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:04.157 [2024-07-27 02:08:32.157887 - 02:08:32.173957] same *ERROR* pair x3
00:09:04.157 [2024-07-27 02:08:32.181618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:04.157 [2024-07-27 02:08:32.181956 - 02:08:32.270224] same *ERROR* pair x12
00:09:04.157 [2024-07-27 02:08:32.274416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.157 [2024-07-27 02:08:32.278221 - 02:08:32.615290] same *ERROR* pair repeats continuously (elapsed prefix advances 00:09:04.157 -> 00:09:04.676)
00:09:04.676 Running I/O for 5 seconds...
00:09:04.676 [2024-07-27 02:08:32.623282 - 02:08:32.723356] same *ERROR* pair continues while I/O runs
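The repeating *ERROR* pair is target-side output, not a host failure: each RPC pauses the subsystem for the namespace update (hence nvmf_rpc_ns_paused), spdk_nvmf_subsystem_add_ns_ext rejects NSID 1 because it is already attached, and the subsystem resumes; that the pairs keep arriving under live I/O suggests the zcopy test is deliberately forcing pause/resume cycles while bdevperf drives traffic, though that intent is inferred from the log rather than quoted from the script. A sketch of provoking the same pair by hand against an already-running target follows; the rpc.py path is this workspace's, while Malloc0 and the use of `-n` for the NSID are assumptions about the local setup and helper options.

#!/usr/bin/env bash
# Re-request a namespace ID the subsystem already serves. Each attempt should
# log the "Requested NSID 1 already in use" / "Unable to add namespace" pair
# on the target side, exactly as captured above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in 1 2 3 4 5; do
  "$rpc" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
done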
00:09:04.676 [2024-07-27 02:08:32.723395 - 02:08:35.358307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: same *ERROR* pair repeats roughly every 13 ms for the rest of the 5-second run (elapsed prefix advances 00:09:04.676 -> 00:09:07.276; the capture ends mid-pair)
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.276 [2024-07-27 02:08:35.370443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.276 [2024-07-27 02:08:35.370476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.276 [2024-07-27 02:08:35.384529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.276 [2024-07-27 02:08:35.384562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.276 [2024-07-27 02:08:35.397418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.276 [2024-07-27 02:08:35.397462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.276 [2024-07-27 02:08:35.411500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.276 [2024-07-27 02:08:35.411534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.276 [2024-07-27 02:08:35.423694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.276 [2024-07-27 02:08:35.423728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.436658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.436688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.448976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.449004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.461848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.461876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.475409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.475442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.488605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.488638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.502234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.502268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.515320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.515368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.528725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.528757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.542692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.570 [2024-07-27 02:08:35.542724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.570 [2024-07-27 02:08:35.554889] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.554921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.569108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.569143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.581973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.582000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.594751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.594779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.607362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.607397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.620737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.620771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.634010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.634068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.647603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.647636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.660873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.660908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.674537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.674583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.687361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.687408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.571 [2024-07-27 02:08:35.701277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.571 [2024-07-27 02:08:35.701305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.713908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.713942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.726829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.726862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.740047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.740091] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.753681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.753714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.766576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.766622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.779330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.779378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.791503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.791536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.805113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.805147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.818479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.818512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.832281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.832314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.845959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.845994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.859772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.859805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.873588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.873634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.887504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.887537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.899559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.899586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.912873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.912901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.925421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.925471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.938714] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.938747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.952228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.952273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.965349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.965383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.978191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.978225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.832 [2024-07-27 02:08:35.990656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.832 [2024-07-27 02:08:35.990684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.003501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.003531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.016218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.016247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.028926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.028955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.041277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.041305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.053826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.053873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.065830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.065864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.079281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.079309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.091624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.091652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.104298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.104331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.116932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.116965] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.129992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.130019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.143143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.143176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.155584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.155611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.168225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.168254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.181169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.181197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.193762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.193797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.205689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.205717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.218380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.218408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.231458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.231490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.092 [2024-07-27 02:08:36.244965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.092 [2024-07-27 02:08:36.244994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.257399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.257427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.270333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.270380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.283301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.283336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.296319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.296366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.308885] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.308912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.321537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.321584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.334214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.334247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.347388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.347415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.360109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.360138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.372828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.372855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.385429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.385462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.398001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.398049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.411694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.411727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.424392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.424425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.437280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.437316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.448652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.448682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.462377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.462411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.474830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.474864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.487656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.487685] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.351 [2024-07-27 02:08:36.499978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.351 [2024-07-27 02:08:36.500012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.512960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.513011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.525749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.525777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.538559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.538606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.551856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.551889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.565271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.565307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.578000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.578049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.590785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.590813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.603922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.603955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.616832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.616865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.630004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.630050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.642906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.642939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.656417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.656446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.668869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.668896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.681482] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.681522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.694883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.694915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.708100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.708134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.721008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.721065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.733908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.733941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.747228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.747262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.610 [2024-07-27 02:08:36.760563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.610 [2024-07-27 02:08:36.760595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.773311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.773346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.786426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.786459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.799896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.799929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.812770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.812820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.825235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.825264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.838712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.838744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.852503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.852550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.865540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.865587] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.878602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.878630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.892211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.892244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.905995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.906029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.918399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.918445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.932229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.932263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.945604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.945631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.958630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.958663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.971711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.971739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.985178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.985213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:36.998307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:36.998336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:37.011001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:37.011033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.870 [2024-07-27 02:08:37.023037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.870 [2024-07-27 02:08:37.023080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.036835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.036869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.049828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.049860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.063103] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.063132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.075720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.075753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.088918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.088964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.102106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.102141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.114814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.114847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.128493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.128526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.140699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.140732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.154218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.154245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.166897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.166930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.180271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.180304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.193807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.193855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.207322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.207356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.220743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.220777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.233817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.233852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.246989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.247017] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.259682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.259711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.272812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.272845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.131 [2024-07-27 02:08:37.285823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.131 [2024-07-27 02:08:37.285870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.298162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.298191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.311201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.311234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.324151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.324185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.336666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.336714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.349618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.349652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.362932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.362980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.376204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.376233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.388808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.388842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.401868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.401902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.415009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.415043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.428151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.428185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.441469] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.441503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.454169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.454197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.466801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.466835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.479788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.479822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.493494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.493540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.506807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.506839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.519807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.519841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.532457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.532484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.391 [2024-07-27 02:08:37.545669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.391 [2024-07-27 02:08:37.545702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.651 [2024-07-27 02:08:37.558816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.651 [2024-07-27 02:08:37.558850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.651 [2024-07-27 02:08:37.571746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.651 [2024-07-27 02:08:37.571778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.651 [2024-07-27 02:08:37.583660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.651 [2024-07-27 02:08:37.583692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.651 [2024-07-27 02:08:37.597325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.651 [2024-07-27 02:08:37.597355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.651 [2024-07-27 02:08:37.613024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.651 [2024-07-27 02:08:37.613078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.651 [2024-07-27 02:08:37.624973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.651 [2024-07-27 02:08:37.625008] 
00:09:09.651 Latency(us)
00:09:09.651 Device Information                                                          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:09.651 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:09.651 Nvme1n1                                                                     :       5.01    9764.18      76.28      0.00     0.00   13088.22    5097.24   29321.29
00:09:09.651 ===================================================================================================================
00:09:09.651 Total                                                                       :              9764.18      76.28      0.00     0.00   13088.22    5097.24   29321.29
[log collapsed: the same "Requested NSID 1 already in use" / "Unable to add namespace" pair continues for roughly 27 more repetitions at ~8 ms intervals, from 02:08:37.645048 to 02:08:37.861644 (elapsed 00:09:09.651 - 00:09:09.911), while the run winds down]
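(The error flood above is what the test provokes by re-issuing nvmf_subsystem_add_ns for an NSID that is already attached. A minimal sketch of reproducing the same message by hand against a running nvmf target, assuming scripts/rpc.py can reach its RPC socket; the subsystem NQN, serial, and bdev name below mirror this run and are otherwise illustrative:)

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py bdev_malloc_create 32 512 -b malloc0                              # 32 MiB backing bdev, 512 B blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # attaches NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # fails: "Requested NSID 1 already in use"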
00:09:09.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (945400) - No such process
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 945400
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.911 delay0
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.911 02:08:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:09.911 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.911 [2024-07-27 02:08:37.982453] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:18.039 Initializing NVMe Controllers
00:09:18.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:18.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:18.039 Initialization complete. Launching workers.
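(Condensed, the traced steps above remove the malloc-backed namespace, wrap malloc0 in a delay bdev with roughly 1 s of injected latency per operation, re-expose it as NSID 1, and drive it with the abort example. A sketch using the arguments shown in the trace, with rpc_cmd expanded to the scripts/rpc.py client it wraps:)

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delays in microseconds (~1 s each)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'               # 5 s run, QD 64, 50/50 randrw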
00:09:18.039 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 12847
00:09:18.039 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12961, failed to submit 127
00:09:18.039 success 12868, unsuccess 93, failed 0
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:18.039 rmmod nvme_tcp
00:09:18.039 rmmod nvme_fabrics
00:09:18.039 rmmod nvme_keyring
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 944066 ']'
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 944066
00:09:18.039 killing process with pid 944066
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 944066
00:09:18.039 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 944066
00:09:18.040 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:18.040 02:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:19.420
00:09:19.420 real    0m28.587s
00:09:19.420 user    0m39.379s
00:09:19.420 sys     0m10.007s
00:09:19.420 ************************************
00:09:19.420 END TEST nvmf_zcopy
00:09:19.420 ************************************
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:19.420 ************************************
00:09:19.420 START TEST nvmf_nmic
00:09:19.420 ************************************
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:19.420 * Looking for test storage...
00:09:19.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
"--hostid=$NVME_HOSTID") 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.420 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.421 02:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:21.332 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.332 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:21.332 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:21.332 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
00:09:21.332 [... nvmf/common.sh@289-@318: e810/x722/mlx supported-device ID tables initialized; trace elided ...]
00:09:21.332 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:09:21.332 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:09:21.332 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:21.332 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:21.332 Found net devices under 0000:0a:00.0: cvl_0_0
00:09:21.332 Found net devices under 0000:0a:00.1: cvl_0_1
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:21.333 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
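[annotation] Condensed, the setup traced above moves the target-side port (cvl_0_0) into its own network namespace so that target and initiator talk over the physical link rather than the kernel loopback. A sketch of the equivalent commands, with the interface names and addresses exactly as assigned above:

    # Target NIC gets its own namespace; initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic into the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify connectivity in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1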
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:21.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:21.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms
00:09:21.592
00:09:21.592 --- 10.0.0.2 ping statistics ---
00:09:21.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:21.592 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:21.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:21.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:09:21.592
00:09:21.592 --- 10.0.0.1 ping statistics ---
00:09:21.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:21.592 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=948803
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:21.592 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 948803
00:09:21.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:21.592 [2024-07-27 02:08:49.662750] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:09:21.592 [2024-07-27 02:08:49.662833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:21.592 EAL: No free 2048 kB hugepages reported on node 1
00:09:21.592 [2024-07-27 02:08:49.699654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:21.592 [2024-07-27 02:08:49.729818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:21.851 [2024-07-27 02:08:49.822814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:21.851 [2024-07-27 02:08:49.822877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:21.851 [2024-07-27 02:08:49.822894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:21.851 [2024-07-27 02:08:49.822907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:21.851 [2024-07-27 02:08:49.822919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:21.851 [2024-07-27 02:08:49.823010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:21.851 [2024-07-27 02:08:49.823096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:21.851 [2024-07-27 02:08:49.823125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:21.851 [2024-07-27 02:08:49.823128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.851 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:21.851 [2024-07-27 02:08:49.973551] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:21.851 02:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:21.851 Malloc0
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:22.110 [2024-07-27 02:08:50.024703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:09:22.110 test case1: single bdev can't be used in multiple subsystems
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:09:22.110 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:09:22.110 [2024-07-27 02:08:50.048565] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:09:22.110 [2024-07-27 02:08:50.048596] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:09:22.110 [2024-07-27 02:08:50.048611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:22.111 request:
00:09:22.111 {
00:09:22.111   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:22.111   "namespace": {
00:09:22.111     "bdev_name": "Malloc0",
00:09:22.111     "no_auto_visible": false
00:09:22.111   },
00:09:22.111   "method": "nvmf_subsystem_add_ns",
00:09:22.111   "req_id": 1
00:09:22.111 }
00:09:22.111 Got JSON-RPC error response
00:09:22.111 response:
00:09:22.111 {
00:09:22.111   "code": -32602,
00:09:22.111   "message": "Invalid parameters"
00:09:22.111 }
00:09:22.111 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:22.111 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:09:22.111 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:09:22.111 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:09:22.111  Adding namespace failed - expected result.
00:09:22.111 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:09:22.111 test case2: host connect to nvmf target in multiple paths
00:09:22.111 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:09:22.111 [2024-07-27 02:08:50.056690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:09:22.111 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:22.678 02:08:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
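[annotation] The two connect calls above attach the same subsystem through two listeners (ports 4420 and 4421), so the host ends up with two controllers backing one namespace. A sketch of how this might be verified from the initiator side with stock nvme-cli (device names and the visible block-device count depend on whether kernel NVMe multipath is enabled):

    # Two paths to the same subsystem: two controllers are created.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    nvme list-subsys                 # both paths should appear under cnode1
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    # the test above waits until this count reaches 1, i.e. the namespace
    # surfaced as a block device with the expected serial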
00:09:23.244 02:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:09:25.773 02:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:25.773 02:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:25.773 02:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:09:25.773 02:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:25.773 02:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:09:25.773 02:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:25.773 [global]
00:09:25.773 thread=1
00:09:25.773 invalidate=1
00:09:25.773 rw=write
00:09:25.773 time_based=1
00:09:25.773 runtime=1
00:09:25.773 ioengine=libaio
00:09:25.773 direct=1
00:09:25.773 bs=4096
00:09:25.773 iodepth=1
00:09:25.773 norandommap=0
00:09:25.773 numjobs=1
00:09:25.773
00:09:25.773 verify_dump=1
00:09:25.773 verify_backlog=512
00:09:25.773 verify_state_save=0
00:09:25.773 do_verify=1
00:09:25.773 verify=crc32c-intel
00:09:25.773 [job0]
00:09:25.773 filename=/dev/nvme0n1
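[annotation] fio-wrapper only renders the job file shown above from its -p/-i/-d/-t/-r/-v arguments. Assuming stock fio, the same single-threaded, queue-depth-1, CRC32C-verified write workload could be launched directly, e.g.:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread --invalidate=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

With iodepth=1 and runtime=1 the run measures per-command latency over the fabric rather than throughput, which is why the bandwidth figures that follow are small.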
00:09:25.773 Could not set queue depth (nvme0n1)
00:09:25.773 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:25.773 fio-3.35
00:09:25.773 Starting 1 thread
00:09:26.707
00:09:26.707 job0: (groupid=0, jobs=1): err= 0: pid=949336: Sat Jul 27 02:08:54 2024
00:09:26.707   read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec)
00:09:26.707     slat (nsec): min=12308, max=34341, avg=27209.91, stdev=9152.01
00:09:26.707     clat (usec): min=40894, max=41265, avg=40976.46, stdev=70.77
00:09:26.707      lat (usec): min=40928, max=41277, avg=41003.67, stdev=66.23
00:09:26.707     clat percentiles (usec):
00:09:26.707      |  1.00th=[41157],  5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:09:26.707      | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:09:26.707      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:09:26.707      | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:09:26.707      | 99.99th=[41157]
00:09:26.707   write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets
00:09:26.707     slat (nsec): min=8076, max=63832, avg=21226.31, stdev=9610.38
00:09:26.707     clat (usec): min=192, max=371, avg=235.23, stdev=37.39
00:09:26.707      lat (usec): min=201, max=404, avg=256.46, stdev=44.02
00:09:26.707     clat percentiles (usec):
00:09:26.707      |  1.00th=[  198],  5.00th=[  202], 10.00th=[  204], 20.00th=[  208],
00:09:26.707      | 30.00th=[  212], 40.00th=[  217], 50.00th=[  221], 60.00th=[  225],
00:09:26.707      | 70.00th=[  239], 80.00th=[  262], 90.00th=[  302], 95.00th=[  318],
00:09:26.707      | 99.00th=[  338], 99.50th=[  343], 99.90th=[  371], 99.95th=[  371],
00:09:26.707      | 99.99th=[  371]
00:09:26.707    bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:09:26.707    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:26.707   lat (usec)   : 250=74.53%, 500=21.35%
00:09:26.707   lat (msec)   : 50=4.12%
00:09:26.707   cpu          : usr=0.58%, sys=1.06%, ctx=534, majf=0, minf=2
00:09:26.707   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:26.707      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:26.707      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:26.707      issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:26.707      latency   : target=0, window=0, percentile=100.00%, depth=1
00:09:26.707
00:09:26.707 Run status group 0 (all jobs):
00:09:26.707    READ: bw=84.9KiB/s (87.0kB/s), 84.9KiB/s-84.9KiB/s (87.0kB/s-87.0kB/s), io=88.0KiB (90.1kB), run=1036-1036msec
00:09:26.707   WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec
00:09:26.707
00:09:26.707 Disk stats (read/write):
00:09:26.707   nvme0n1: ios=68/512, merge=0/0, ticks=786/112, in_queue=898, util=92.38%
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:26.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:26.966 rmmod nvme_tcp
00:09:26.966 rmmod nvme_fabrics
00:09:26.966 rmmod nvme_keyring
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 948803
00:09:26.966 killing process with pid 948803
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 948803
00:09:26.966 02:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 948803
00:09:27.226 02:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:27.226 02:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:29.135 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:29.135
00:09:29.135 real    0m9.832s
00:09:29.135 user    0m22.596s
00:09:29.135 sys     0m2.178s
00:09:29.135 ************************************
00:09:29.135 END TEST nvmf_nmic
00:09:29.135 ************************************
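[annotation] Worth noting from the nmic run that just ended: case1 passes only when the RPC fails, so the script captures rpc_cmd's exit status instead of letting set -e abort the test. A sketch of that expected-failure pattern, using the rpc.py path this log defines for the next test (the failure echo and exit code here are illustrative):

    # Expect nvmf_subsystem_add_ns to be rejected: Malloc0 is already
    # claimed exclusive_write by cnode1, so cnode2 must not get it.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nmic_status=0
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?
    if [ "$nmic_status" -eq 0 ]; then
        echo "Adding namespace succeeded but should have failed." >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'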
00:09:29.419 02:08:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:09:29.419 ************************************
00:09:29.419 START TEST nvmf_fio_target
00:09:29.419 ************************************
00:09:29.419 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:09:29.419 * Looking for test storage...
00:09:29.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:29.419 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:29.420 [... identical common.sh environment setup as traced for nvmf_nmic above (ports 4420-4422, NVME_HOSTNQN nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55, paths/export.sh PATH exports, NVMF_APP args) ...]
00:09:29.420 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:29.420 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:29.420 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:29.420 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:09:29.420 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs
00:09:29.420 02:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns
00:09:31.319 [... identical PCI/NIC discovery trace as for nvmf_nmic above, now at 02:08:59; elided ...]
00:09:31.320 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:31.320 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:31.320 Found net devices under 0000:0a:00.0: cvl_0_0
00:09:31.320 Found net devices under 0000:0a:00.1: cvl_0_1
00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes
00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:31.320 02:08:59
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.320 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:09:31.320 00:09:31.320 --- 10.0.0.2 ping statistics --- 00:09:31.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.320 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:09:31.321 00:09:31.321 --- 10.0.0.1 ping statistics --- 00:09:31.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.321 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=951519 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 951519 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 951519 ']' 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.321 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.579 [2024-07-27 02:08:59.485389] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:09:31.579 [2024-07-27 02:08:59.485466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.579 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.579 [2024-07-27 02:08:59.522840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:31.579 [2024-07-27 02:08:59.555217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.579 [2024-07-27 02:08:59.646232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.579 [2024-07-27 02:08:59.646289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.579 [2024-07-27 02:08:59.646306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.579 [2024-07-27 02:08:59.646320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.579 [2024-07-27 02:08:59.646332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.579 [2024-07-27 02:08:59.646425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.579 [2024-07-27 02:08:59.646482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.579 [2024-07-27 02:08:59.646526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.579 [2024-07-27 02:08:59.646528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.837 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.837 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:31.837 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.837 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.837 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.837 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.837 02:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.095 [2024-07-27 02:09:00.029005] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.095 02:09:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.353 02:09:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:32.353 02:09:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.610 02:09:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:32.610 02:09:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.868 
02:09:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:32.868 02:09:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.125 02:09:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:33.125 02:09:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:33.383 02:09:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.641 02:09:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:33.641 02:09:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.899 02:09:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:33.899 02:09:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.157 02:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:34.157 02:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:34.414 02:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.672 02:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:34.672 02:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.930 02:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:34.930 02:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.188 02:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.445 [2024-07-27 02:09:03.508575] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.445 02:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:35.702 02:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:35.960 02:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:36.525 02:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:36.525 02:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:36.525 02:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.525 02:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:36.525 02:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:36.525 02:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:39.053 02:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:39.053 02:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:39.053 02:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.053 02:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:39.053 02:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.053 02:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:39.053 02:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.053 [global] 00:09:39.053 thread=1 00:09:39.053 invalidate=1 00:09:39.053 rw=write 00:09:39.053 time_based=1 00:09:39.053 runtime=1 00:09:39.053 ioengine=libaio 00:09:39.053 direct=1 00:09:39.053 bs=4096 00:09:39.053 iodepth=1 00:09:39.053 norandommap=0 00:09:39.053 numjobs=1 00:09:39.053 00:09:39.053 verify_dump=1 00:09:39.053 verify_backlog=512 00:09:39.053 verify_state_save=0 00:09:39.053 do_verify=1 00:09:39.053 verify=crc32c-intel 00:09:39.053 [job0] 00:09:39.053 filename=/dev/nvme0n1 00:09:39.053 [job1] 00:09:39.053 filename=/dev/nvme0n2 00:09:39.053 [job2] 00:09:39.053 filename=/dev/nvme0n3 00:09:39.053 [job3] 00:09:39.053 filename=/dev/nvme0n4 00:09:39.053 Could not set queue depth (nvme0n1) 00:09:39.053 Could not set queue depth (nvme0n2) 00:09:39.053 Could not set queue depth (nvme0n3) 00:09:39.053 Could not set queue depth (nvme0n4) 00:09:39.053 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.053 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.053 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.053 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.053 fio-3.35 00:09:39.053 Starting 4 threads 00:09:39.986 00:09:39.986 job0: (groupid=0, jobs=1): err= 0: pid=952593: Sat Jul 27 02:09:08 2024 00:09:39.986 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:09:39.986 slat (nsec): min=11787, max=29368, avg=15792.05, stdev=4809.28 00:09:39.986 clat (usec): min=40775, max=43980, 
avg=41174.23, stdev=667.09 00:09:39.986 lat (usec): min=40793, max=43996, avg=41190.02, stdev=666.75 00:09:39.986 clat percentiles (usec): 00:09:39.986 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:39.986 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.986 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:39.986 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:09:39.986 | 99.99th=[43779] 00:09:39.986 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:39.986 slat (nsec): min=7483, max=62177, avg=10514.61, stdev=4069.05 00:09:39.986 clat (usec): min=195, max=758, avg=246.33, stdev=37.10 00:09:39.986 lat (usec): min=204, max=765, avg=256.85, stdev=38.19 00:09:39.986 clat percentiles (usec): 00:09:39.986 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 227], 00:09:39.986 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 245], 00:09:39.986 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 293], 00:09:39.986 | 99.00th=[ 363], 99.50th=[ 416], 99.90th=[ 758], 99.95th=[ 758], 00:09:39.986 | 99.99th=[ 758] 00:09:39.986 bw ( KiB/s): min= 4096, max= 4096, per=29.86%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.986 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.986 lat (usec) : 250=67.79%, 500=27.72%, 750=0.19%, 1000=0.19% 00:09:39.986 lat (msec) : 50=4.12% 00:09:39.986 cpu : usr=0.67%, sys=0.29%, ctx=535, majf=0, minf=1 00:09:39.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.986 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.986 job1: (groupid=0, jobs=1): err= 0: pid=952594: Sat Jul 27 02:09:08 2024 00:09:39.986 read: IOPS=812, BW=3249KiB/s (3326kB/s)(3268KiB/1006msec) 00:09:39.986 slat (nsec): min=5800, max=56366, avg=10847.11, stdev=6124.72 00:09:39.986 clat (usec): min=386, max=41042, avg=888.61, stdev=4002.29 00:09:39.986 lat (usec): min=395, max=41056, avg=899.46, stdev=4003.10 00:09:39.986 clat percentiles (usec): 00:09:39.986 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 437], 00:09:39.986 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 474], 00:09:39.986 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 523], 95.00th=[ 578], 00:09:39.986 | 99.00th=[ 9241], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:39.986 | 99.99th=[41157] 00:09:39.986 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:09:39.986 slat (nsec): min=7484, max=55046, avg=13030.56, stdev=6219.29 00:09:39.986 clat (usec): min=196, max=1334, avg=244.29, stdev=45.24 00:09:39.986 lat (usec): min=204, max=1345, avg=257.32, stdev=46.11 00:09:39.986 clat percentiles (usec): 00:09:39.986 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:09:39.986 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:09:39.986 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 293], 00:09:39.986 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 750], 99.95th=[ 1336], 00:09:39.986 | 99.99th=[ 1336] 00:09:39.986 bw ( KiB/s): min= 3424, max= 4768, per=29.86%, avg=4096.00, stdev=950.35, samples=2 00:09:39.986 iops : min= 856, max= 1192, avg=1024.00, stdev=237.59, samples=2 
00:09:39.986 lat (usec) : 250=39.71%, 500=52.85%, 750=6.63% 00:09:39.986 lat (msec) : 2=0.22%, 4=0.05%, 10=0.11%, 50=0.43% 00:09:39.986 cpu : usr=1.99%, sys=2.69%, ctx=1843, majf=0, minf=1 00:09:39.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.987 issued rwts: total=817,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.987 job2: (groupid=0, jobs=1): err= 0: pid=952595: Sat Jul 27 02:09:08 2024 00:09:39.987 read: IOPS=985, BW=3943KiB/s (4037kB/s)(4120KiB/1045msec) 00:09:39.987 slat (nsec): min=5643, max=52463, avg=11953.12, stdev=6730.74 00:09:39.987 clat (usec): min=304, max=44765, avg=622.62, stdev=3139.86 00:09:39.987 lat (usec): min=311, max=44779, avg=634.57, stdev=3140.30 00:09:39.987 clat percentiles (usec): 00:09:39.987 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:09:39.987 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:09:39.987 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 469], 95.00th=[ 498], 00:09:39.987 | 99.00th=[ 807], 99.50th=[40633], 99.90th=[41157], 99.95th=[44827], 00:09:39.987 | 99.99th=[44827] 00:09:39.987 write: IOPS=1469, BW=5879KiB/s (6021kB/s)(6144KiB/1045msec); 0 zone resets 00:09:39.987 slat (nsec): min=6955, max=39533, avg=11026.07, stdev=4600.19 00:09:39.987 clat (usec): min=202, max=438, avg=237.94, stdev=25.50 00:09:39.987 lat (usec): min=210, max=462, avg=248.97, stdev=26.97 00:09:39.987 clat percentiles (usec): 00:09:39.987 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:09:39.987 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 237], 00:09:39.987 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 273], 00:09:39.987 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 441], 00:09:39.987 | 99.99th=[ 441] 00:09:39.987 bw ( KiB/s): min= 4096, max= 8192, per=44.79%, avg=6144.00, stdev=2896.31, samples=2 00:09:39.987 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:39.987 lat (usec) : 250=50.16%, 500=48.05%, 750=1.29%, 1000=0.16% 00:09:39.987 lat (msec) : 2=0.12%, 50=0.23% 00:09:39.987 cpu : usr=1.82%, sys=4.12%, ctx=2566, majf=0, minf=2 00:09:39.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.987 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.987 job3: (groupid=0, jobs=1): err= 0: pid=952596: Sat Jul 27 02:09:08 2024 00:09:39.987 read: IOPS=409, BW=1637KiB/s (1676kB/s)(1660KiB/1014msec) 00:09:39.987 slat (nsec): min=6204, max=44176, avg=9456.10, stdev=4374.34 00:09:39.987 clat (usec): min=321, max=42072, avg=2099.77, stdev=8157.30 00:09:39.987 lat (usec): min=327, max=42086, avg=2109.23, stdev=8158.74 00:09:39.987 clat percentiles (usec): 00:09:39.987 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 351], 00:09:39.987 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 420], 00:09:39.987 | 70.00th=[ 478], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 906], 00:09:39.987 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:39.987 | 
99.99th=[42206] 00:09:39.987 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:39.987 slat (nsec): min=8447, max=35942, avg=11825.44, stdev=3973.94 00:09:39.987 clat (usec): min=216, max=708, avg=253.11, stdev=25.40 00:09:39.987 lat (usec): min=225, max=721, avg=264.94, stdev=26.19 00:09:39.987 clat percentiles (usec): 00:09:39.987 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 243], 00:09:39.987 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:09:39.987 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:09:39.987 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 709], 99.95th=[ 709], 00:09:39.987 | 99.99th=[ 709] 00:09:39.987 bw ( KiB/s): min= 4096, max= 4096, per=29.86%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.987 lat (usec) : 250=30.20%, 500=64.40%, 750=2.70%, 1000=0.65% 00:09:39.987 lat (msec) : 2=0.22%, 50=1.83% 00:09:39.987 cpu : usr=0.69%, sys=1.28%, ctx=928, majf=0, minf=1 00:09:39.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.987 issued rwts: total=415,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.987 00:09:39.987 Run status group 0 (all jobs): 00:09:39.987 READ: bw=8743KiB/s (8952kB/s), 84.6KiB/s-3943KiB/s (86.6kB/s-4037kB/s), io=9136KiB (9355kB), run=1006-1045msec 00:09:39.987 WRITE: bw=13.4MiB/s (14.0MB/s), 1969KiB/s-5879KiB/s (2016kB/s-6021kB/s), io=14.0MiB (14.7MB), run=1006-1045msec 00:09:39.987 00:09:39.987 Disk stats (read/write): 00:09:39.987 nvme0n1: ios=67/512, merge=0/0, ticks=1189/127, in_queue=1316, util=95.09% 00:09:39.987 nvme0n2: ios=863/1024, merge=0/0, ticks=1476/238, in_queue=1714, util=97.76% 00:09:39.987 nvme0n3: ios=1025/1536, merge=0/0, ticks=412/345, in_queue=757, util=88.88% 00:09:39.987 nvme0n4: ios=467/512, merge=0/0, ticks=1068/126, in_queue=1194, util=97.88% 00:09:39.987 02:09:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:40.244 [global] 00:09:40.244 thread=1 00:09:40.244 invalidate=1 00:09:40.245 rw=randwrite 00:09:40.245 time_based=1 00:09:40.245 runtime=1 00:09:40.245 ioengine=libaio 00:09:40.245 direct=1 00:09:40.245 bs=4096 00:09:40.245 iodepth=1 00:09:40.245 norandommap=0 00:09:40.245 numjobs=1 00:09:40.245 00:09:40.245 verify_dump=1 00:09:40.245 verify_backlog=512 00:09:40.245 verify_state_save=0 00:09:40.245 do_verify=1 00:09:40.245 verify=crc32c-intel 00:09:40.245 [job0] 00:09:40.245 filename=/dev/nvme0n1 00:09:40.245 [job1] 00:09:40.245 filename=/dev/nvme0n2 00:09:40.245 [job2] 00:09:40.245 filename=/dev/nvme0n3 00:09:40.245 [job3] 00:09:40.245 filename=/dev/nvme0n4 00:09:40.245 Could not set queue depth (nvme0n1) 00:09:40.245 Could not set queue depth (nvme0n2) 00:09:40.245 Could not set queue depth (nvme0n3) 00:09:40.245 Could not set queue depth (nvme0n4) 00:09:40.245 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.245 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.245 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.245 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.245 fio-3.35 00:09:40.245 Starting 4 threads 00:09:41.618 00:09:41.618 job0: (groupid=0, jobs=1): err= 0: pid=953348: Sat Jul 27 02:09:09 2024 00:09:41.618 read: IOPS=1149, BW=4598KiB/s (4709kB/s)(4704KiB/1023msec) 00:09:41.618 slat (nsec): min=4435, max=34192, avg=10959.63, stdev=5045.86 00:09:41.618 clat (usec): min=289, max=41006, avg=532.99, stdev=2639.10 00:09:41.618 lat (usec): min=294, max=41018, avg=543.95, stdev=2639.28 00:09:41.618 clat percentiles (usec): 00:09:41.618 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:09:41.618 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:09:41.618 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 424], 95.00th=[ 490], 00:09:41.618 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:09:41.618 | 99.99th=[41157] 00:09:41.618 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:09:41.618 slat (nsec): min=5894, max=59562, avg=9691.73, stdev=5352.74 00:09:41.618 clat (usec): min=181, max=626, avg=233.87, stdev=55.23 00:09:41.618 lat (usec): min=188, max=636, avg=243.56, stdev=57.74 00:09:41.618 clat percentiles (usec): 00:09:41.618 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:09:41.618 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:09:41.618 | 70.00th=[ 231], 80.00th=[ 251], 90.00th=[ 326], 95.00th=[ 363], 00:09:41.618 | 99.00th=[ 416], 99.50th=[ 449], 99.90th=[ 562], 99.95th=[ 627], 00:09:41.618 | 99.99th=[ 627] 00:09:41.618 bw ( KiB/s): min= 4096, max= 8192, per=42.66%, avg=6144.00, stdev=2896.31, samples=2 00:09:41.618 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:41.618 lat (usec) : 250=45.13%, 500=52.91%, 750=1.77% 00:09:41.618 lat (msec) : 50=0.18% 00:09:41.618 cpu : usr=2.05%, sys=2.35%, ctx=2712, majf=0, minf=1 00:09:41.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.618 issued rwts: total=1176,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.618 job1: (groupid=0, jobs=1): err= 0: pid=953363: Sat Jul 27 02:09:09 2024 00:09:41.618 read: IOPS=18, BW=74.4KiB/s (76.1kB/s)(76.0KiB/1022msec) 00:09:41.618 slat (nsec): min=12882, max=29028, avg=15281.11, stdev=3651.51 00:09:41.618 clat (usec): min=40831, max=41320, avg=40994.73, stdev=108.71 00:09:41.618 lat (usec): min=40848, max=41333, avg=41010.02, stdev=107.50 00:09:41.618 clat percentiles (usec): 00:09:41.618 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:41.618 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:41.618 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:41.618 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:41.618 | 99.99th=[41157] 00:09:41.618 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:41.618 slat (nsec): min=6262, max=70394, avg=18372.60, stdev=7224.77 00:09:41.618 clat (usec): min=214, max=1272, avg=451.02, stdev=176.22 00:09:41.618 lat (usec): min=234, max=1296, avg=469.39, stdev=177.69 00:09:41.618 clat percentiles (usec): 
00:09:41.618 | 1.00th=[ 231], 5.00th=[ 269], 10.00th=[ 310], 20.00th=[ 338], 00:09:41.618 | 30.00th=[ 351], 40.00th=[ 371], 50.00th=[ 392], 60.00th=[ 429], 00:09:41.618 | 70.00th=[ 469], 80.00th=[ 545], 90.00th=[ 701], 95.00th=[ 807], 00:09:41.618 | 99.00th=[ 1156], 99.50th=[ 1221], 99.90th=[ 1270], 99.95th=[ 1270], 00:09:41.618 | 99.99th=[ 1270] 00:09:41.618 bw ( KiB/s): min= 4096, max= 4096, per=28.44%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.618 lat (usec) : 250=3.58%, 500=69.68%, 750=16.20%, 1000=4.71% 00:09:41.618 lat (msec) : 2=2.26%, 50=3.58% 00:09:41.618 cpu : usr=0.49%, sys=0.88%, ctx=531, majf=0, minf=2 00:09:41.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.618 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.618 job2: (groupid=0, jobs=1): err= 0: pid=953373: Sat Jul 27 02:09:09 2024 00:09:41.618 read: IOPS=19, BW=78.2KiB/s (80.1kB/s)(80.0KiB/1023msec) 00:09:41.618 slat (nsec): min=11762, max=26176, avg=17461.00, stdev=4532.94 00:09:41.618 clat (usec): min=40625, max=42052, avg=41314.30, stdev=510.83 00:09:41.618 lat (usec): min=40637, max=42068, avg=41331.77, stdev=510.83 00:09:41.618 clat percentiles (usec): 00:09:41.618 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:41.618 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:41.618 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:41.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:41.618 | 99.99th=[42206] 00:09:41.618 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:41.618 slat (nsec): min=10641, max=51896, avg=14085.31, stdev=3920.86 00:09:41.618 clat (usec): min=236, max=1368, avg=363.82, stdev=191.76 00:09:41.618 lat (usec): min=248, max=1388, avg=377.90, stdev=193.39 00:09:41.618 clat percentiles (usec): 00:09:41.618 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:09:41.618 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:09:41.618 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 441], 95.00th=[ 840], 00:09:41.618 | 99.00th=[ 1188], 99.50th=[ 1319], 99.90th=[ 1369], 99.95th=[ 1369], 00:09:41.618 | 99.99th=[ 1369] 00:09:41.618 bw ( KiB/s): min= 4096, max= 4096, per=28.44%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.618 lat (usec) : 250=3.76%, 500=83.83%, 750=2.07%, 1000=3.76% 00:09:41.618 lat (msec) : 2=2.82%, 50=3.76% 00:09:41.618 cpu : usr=0.49%, sys=0.49%, ctx=534, majf=0, minf=1 00:09:41.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.618 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.618 job3: (groupid=0, jobs=1): err= 0: pid=953382: Sat Jul 27 02:09:09 2024 00:09:41.618 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:41.618 slat (nsec): min=5684, max=31682, avg=8275.62, 
stdev=3111.17 00:09:41.618 clat (usec): min=314, max=41343, avg=622.23, stdev=2832.55 00:09:41.618 lat (usec): min=321, max=41351, avg=630.50, stdev=2832.99 00:09:41.618 clat percentiles (usec): 00:09:41.618 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 351], 00:09:41.618 | 30.00th=[ 416], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 445], 00:09:41.618 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 494], 95.00th=[ 510], 00:09:41.618 | 99.00th=[ 529], 99.50th=[ 627], 99.90th=[41157], 99.95th=[41157], 00:09:41.618 | 99.99th=[41157] 00:09:41.618 write: IOPS=1121, BW=4488KiB/s (4595kB/s)(4492KiB/1001msec); 0 zone resets 00:09:41.618 slat (nsec): min=7384, max=72558, avg=13107.54, stdev=6911.58 00:09:41.618 clat (usec): min=192, max=1263, avg=296.62, stdev=154.79 00:09:41.618 lat (usec): min=200, max=1287, avg=309.73, stdev=158.39 00:09:41.618 clat percentiles (usec): 00:09:41.618 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:09:41.618 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 253], 00:09:41.618 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 498], 95.00th=[ 685], 00:09:41.618 | 99.00th=[ 979], 99.50th=[ 1074], 99.90th=[ 1205], 99.95th=[ 1270], 00:09:41.618 | 99.99th=[ 1270] 00:09:41.618 bw ( KiB/s): min= 6064, max= 6064, per=42.11%, avg=6064.00, stdev= 0.00, samples=1 00:09:41.618 iops : min= 1516, max= 1516, avg=1516.00, stdev= 0.00, samples=1 00:09:41.618 lat (usec) : 250=30.69%, 500=60.32%, 750=7.08%, 1000=1.35% 00:09:41.618 lat (msec) : 2=0.33%, 50=0.23% 00:09:41.618 cpu : usr=2.40%, sys=2.20%, ctx=2148, majf=0, minf=1 00:09:41.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.619 issued rwts: total=1024,1123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.619 00:09:41.619 Run status group 0 (all jobs): 00:09:41.619 READ: bw=8755KiB/s (8965kB/s), 74.4KiB/s-4598KiB/s (76.1kB/s-4709kB/s), io=8956KiB (9171kB), run=1001-1023msec 00:09:41.619 WRITE: bw=14.1MiB/s (14.7MB/s), 2002KiB/s-6006KiB/s (2050kB/s-6150kB/s), io=14.4MiB (15.1MB), run=1001-1023msec 00:09:41.619 00:09:41.619 Disk stats (read/write): 00:09:41.619 nvme0n1: ios=1076/1536, merge=0/0, ticks=477/355, in_queue=832, util=87.07% 00:09:41.619 nvme0n2: ios=39/512, merge=0/0, ticks=595/226, in_queue=821, util=86.98% 00:09:41.619 nvme0n3: ios=62/512, merge=0/0, ticks=947/189, in_queue=1136, util=97.17% 00:09:41.619 nvme0n4: ios=856/1024, merge=0/0, ticks=1410/291, in_queue=1701, util=97.89% 00:09:41.619 02:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:41.619 [global] 00:09:41.619 thread=1 00:09:41.619 invalidate=1 00:09:41.619 rw=write 00:09:41.619 time_based=1 00:09:41.619 runtime=1 00:09:41.619 ioengine=libaio 00:09:41.619 direct=1 00:09:41.619 bs=4096 00:09:41.619 iodepth=128 00:09:41.619 norandommap=0 00:09:41.619 numjobs=1 00:09:41.619 00:09:41.619 verify_dump=1 00:09:41.619 verify_backlog=512 00:09:41.619 verify_state_save=0 00:09:41.619 do_verify=1 00:09:41.619 verify=crc32c-intel 00:09:41.619 [job0] 00:09:41.619 filename=/dev/nvme0n1 00:09:41.619 [job1] 00:09:41.619 filename=/dev/nvme0n2 00:09:41.619 [job2] 00:09:41.619 filename=/dev/nvme0n3 00:09:41.619 [job3] 00:09:41.619 
filename=/dev/nvme0n4 00:09:41.619 Could not set queue depth (nvme0n1) 00:09:41.619 Could not set queue depth (nvme0n2) 00:09:41.619 Could not set queue depth (nvme0n3) 00:09:41.619 Could not set queue depth (nvme0n4) 00:09:41.878 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.878 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.878 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.878 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.878 fio-3.35 00:09:41.878 Starting 4 threads 00:09:43.258 00:09:43.258 job0: (groupid=0, jobs=1): err= 0: pid=953679: Sat Jul 27 02:09:11 2024 00:09:43.258 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:43.258 slat (usec): min=2, max=11528, avg=102.16, stdev=542.53 00:09:43.258 clat (usec): min=8162, max=26860, avg=13859.03, stdev=2859.70 00:09:43.258 lat (usec): min=8168, max=29023, avg=13961.19, stdev=2860.74 00:09:43.258 clat percentiles (usec): 00:09:43.258 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11076], 20.00th=[11600], 00:09:43.258 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13304], 60.00th=[14222], 00:09:43.258 | 70.00th=[14746], 80.00th=[15401], 90.00th=[17433], 95.00th=[19792], 00:09:43.258 | 99.00th=[23200], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:09:43.258 | 99.99th=[26870] 00:09:43.258 write: IOPS=5040, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1001msec); 0 zone resets 00:09:43.258 slat (usec): min=3, max=9550, avg=94.26, stdev=519.08 00:09:43.258 clat (usec): min=750, max=25078, avg=12440.12, stdev=2935.63 00:09:43.258 lat (usec): min=768, max=25097, avg=12534.38, stdev=2940.97 00:09:43.258 clat percentiles (usec): 00:09:43.258 | 1.00th=[ 5932], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:09:43.258 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12256], 60.00th=[12911], 00:09:43.258 | 70.00th=[13435], 80.00th=[13829], 90.00th=[16188], 95.00th=[17695], 00:09:43.258 | 99.00th=[22414], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:09:43.258 | 99.99th=[25035] 00:09:43.258 bw ( KiB/s): min=20480, max=20480, per=29.37%, avg=20480.00, stdev= 0.00, samples=1 00:09:43.258 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:43.258 lat (usec) : 1000=0.08% 00:09:43.258 lat (msec) : 4=0.36%, 10=9.45%, 20=87.31%, 50=2.80% 00:09:43.258 cpu : usr=7.20%, sys=9.70%, ctx=435, majf=0, minf=13 00:09:43.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:43.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.258 issued rwts: total=4608,5046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.258 job1: (groupid=0, jobs=1): err= 0: pid=953680: Sat Jul 27 02:09:11 2024 00:09:43.258 read: IOPS=4911, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1002msec) 00:09:43.258 slat (usec): min=3, max=13433, avg=97.25, stdev=468.36 00:09:43.258 clat (usec): min=703, max=31750, avg=12912.48, stdev=3274.93 00:09:43.258 lat (usec): min=1687, max=31766, avg=13009.73, stdev=3271.82 00:09:43.258 clat percentiles (usec): 00:09:43.258 | 1.00th=[ 4883], 5.00th=[10028], 10.00th=[10552], 20.00th=[11469], 00:09:43.258 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 
60.00th=[12780], 00:09:43.258 | 70.00th=[13173], 80.00th=[13698], 90.00th=[15008], 95.00th=[16909], 00:09:43.258 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:09:43.258 | 99.99th=[31851] 00:09:43.258 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:43.258 slat (usec): min=4, max=5589, avg=89.84, stdev=376.58 00:09:43.258 clat (usec): min=7907, max=33778, avg=12252.01, stdev=2977.65 00:09:43.258 lat (usec): min=7925, max=33800, avg=12341.85, stdev=2983.29 00:09:43.258 clat percentiles (usec): 00:09:43.258 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:09:43.258 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:09:43.258 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13829], 95.00th=[15533], 00:09:43.258 | 99.00th=[28705], 99.50th=[28967], 99.90th=[33817], 99.95th=[33817], 00:09:43.258 | 99.99th=[33817] 00:09:43.258 bw ( KiB/s): min=20232, max=20728, per=29.37%, avg=20480.00, stdev=350.72, samples=2 00:09:43.258 iops : min= 5058, max= 5182, avg=5120.00, stdev=87.68, samples=2 00:09:43.258 lat (usec) : 750=0.01% 00:09:43.258 lat (msec) : 2=0.14%, 4=0.18%, 10=6.88%, 20=89.71%, 50=3.08% 00:09:43.258 cpu : usr=8.19%, sys=11.79%, ctx=588, majf=0, minf=7 00:09:43.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:43.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.258 issued rwts: total=4921,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.258 job2: (groupid=0, jobs=1): err= 0: pid=953681: Sat Jul 27 02:09:11 2024 00:09:43.258 read: IOPS=4149, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1012msec) 00:09:43.258 slat (usec): min=2, max=11984, avg=115.06, stdev=817.02 00:09:43.258 clat (usec): min=4476, max=39326, avg=15061.07, stdev=3733.54 00:09:43.258 lat (usec): min=6625, max=39332, avg=15176.13, stdev=3795.89 00:09:43.258 clat percentiles (usec): 00:09:43.258 | 1.00th=[10159], 5.00th=[11469], 10.00th=[12125], 20.00th=[12649], 00:09:43.258 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13960], 60.00th=[14615], 00:09:43.258 | 70.00th=[15533], 80.00th=[17695], 90.00th=[20055], 95.00th=[21627], 00:09:43.258 | 99.00th=[29230], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:09:43.258 | 99.99th=[39584] 00:09:43.258 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:09:43.258 slat (usec): min=4, max=11899, avg=103.08, stdev=653.46 00:09:43.258 clat (usec): min=1570, max=56567, avg=14108.29, stdev=7761.91 00:09:43.258 lat (usec): min=1583, max=56583, avg=14211.36, stdev=7799.00 00:09:43.258 clat percentiles (usec): 00:09:43.258 | 1.00th=[ 4686], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8455], 00:09:43.258 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13173], 60.00th=[13566], 00:09:43.258 | 70.00th=[13960], 80.00th=[15795], 90.00th=[19006], 95.00th=[30802], 00:09:43.258 | 99.00th=[51119], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:09:43.258 | 99.99th=[56361] 00:09:43.258 bw ( KiB/s): min=16184, max=20480, per=26.29%, avg=18332.00, stdev=3037.73, samples=2 00:09:43.259 iops : min= 4046, max= 5120, avg=4583.00, stdev=759.43, samples=2 00:09:43.259 lat (msec) : 2=0.07%, 4=0.31%, 10=12.27%, 20=78.60%, 50=7.99% 00:09:43.259 lat (msec) : 100=0.76% 00:09:43.259 cpu : usr=5.34%, sys=10.39%, ctx=344, majf=0, minf=17 00:09:43.259 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:43.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.259 issued rwts: total=4199,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.259 job3: (groupid=0, jobs=1): err= 0: pid=953682: Sat Jul 27 02:09:11 2024 00:09:43.259 read: IOPS=3113, BW=12.2MiB/s (12.8MB/s)(12.8MiB/1053msec) 00:09:43.259 slat (usec): min=2, max=31101, avg=152.45, stdev=1282.89 00:09:43.259 clat (usec): min=5896, max=93619, avg=21163.03, stdev=14819.11 00:09:43.259 lat (usec): min=7546, max=93735, avg=21315.49, stdev=14884.98 00:09:43.259 clat percentiles (usec): 00:09:43.259 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[12256], 20.00th=[13304], 00:09:43.259 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16319], 60.00th=[17695], 00:09:43.259 | 70.00th=[18482], 80.00th=[22414], 90.00th=[40109], 95.00th=[53740], 00:09:43.259 | 99.00th=[92799], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:09:43.259 | 99.99th=[93848] 00:09:43.259 write: IOPS=3403, BW=13.3MiB/s (13.9MB/s)(14.0MiB/1053msec); 0 zone resets 00:09:43.259 slat (usec): min=3, max=30574, avg=127.26, stdev=1032.85 00:09:43.259 clat (usec): min=1963, max=62234, avg=17774.76, stdev=9560.02 00:09:43.259 lat (usec): min=1978, max=62248, avg=17902.03, stdev=9615.40 00:09:43.259 clat percentiles (usec): 00:09:43.259 | 1.00th=[ 7373], 5.00th=[ 8979], 10.00th=[10814], 20.00th=[12911], 00:09:43.259 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14877], 60.00th=[15664], 00:09:43.259 | 70.00th=[17171], 80.00th=[19268], 90.00th=[29230], 95.00th=[41681], 00:09:43.259 | 99.00th=[55313], 99.50th=[60031], 99.90th=[62129], 99.95th=[62129], 00:09:43.259 | 99.99th=[62129] 00:09:43.259 bw ( KiB/s): min=12288, max=16384, per=20.56%, avg=14336.00, stdev=2896.31, samples=2 00:09:43.259 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:43.259 lat (msec) : 2=0.03%, 4=0.12%, 10=5.70%, 20=72.50%, 50=16.65% 00:09:43.259 lat (msec) : 100=5.00% 00:09:43.259 cpu : usr=2.47%, sys=5.42%, ctx=255, majf=0, minf=15 00:09:43.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:43.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.259 issued rwts: total=3279,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.259 00:09:43.259 Run status group 0 (all jobs): 00:09:43.259 READ: bw=63.1MiB/s (66.2MB/s), 12.2MiB/s-19.2MiB/s (12.8MB/s-20.1MB/s), io=66.4MiB (69.7MB), run=1001-1053msec 00:09:43.259 WRITE: bw=68.1MiB/s (71.4MB/s), 13.3MiB/s-20.0MiB/s (13.9MB/s-20.9MB/s), io=71.7MiB (75.2MB), run=1001-1053msec 00:09:43.259 00:09:43.259 Disk stats (read/write): 00:09:43.259 nvme0n1: ios=3793/4096, merge=0/0, ticks=17263/18819, in_queue=36082, util=98.00% 00:09:43.259 nvme0n2: ios=3876/4096, merge=0/0, ticks=12831/11783, in_queue=24614, util=87.46% 00:09:43.259 nvme0n3: ios=3411/3584, merge=0/0, ticks=48572/50862, in_queue=99434, util=98.38% 00:09:43.259 nvme0n4: ios=2643/3072, merge=0/0, ticks=27238/26856, in_queue=54094, util=98.79% 00:09:43.259 02:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
00:09:43.259 [global] 00:09:43.259 thread=1 00:09:43.259 invalidate=1 00:09:43.259 rw=randwrite 00:09:43.259 time_based=1 00:09:43.259 runtime=1 00:09:43.259 ioengine=libaio 00:09:43.259 direct=1 00:09:43.259 bs=4096 00:09:43.259 iodepth=128 00:09:43.259 norandommap=0 00:09:43.259 numjobs=1 00:09:43.259 00:09:43.259 verify_dump=1 00:09:43.259 verify_backlog=512 00:09:43.259 verify_state_save=0 00:09:43.259 do_verify=1 00:09:43.259 verify=crc32c-intel 00:09:43.259 [job0] 00:09:43.259 filename=/dev/nvme0n1 00:09:43.259 [job1] 00:09:43.259 filename=/dev/nvme0n2 00:09:43.259 [job2] 00:09:43.259 filename=/dev/nvme0n3 00:09:43.259 [job3] 00:09:43.259 filename=/dev/nvme0n4 00:09:43.259 Could not set queue depth (nvme0n1) 00:09:43.259 Could not set queue depth (nvme0n2) 00:09:43.259 Could not set queue depth (nvme0n3) 00:09:43.259 Could not set queue depth (nvme0n4) 00:09:43.259 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.259 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.259 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.259 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.259 fio-3.35 00:09:43.259 Starting 4 threads 00:09:44.639 00:09:44.639 job0: (groupid=0, jobs=1): err= 0: pid=953912: Sat Jul 27 02:09:12 2024 00:09:44.639 read: IOPS=3435, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1007msec) 00:09:44.639 slat (usec): min=3, max=13143, avg=120.10, stdev=753.13 00:09:44.639 clat (usec): min=2866, max=66714, avg=16236.45, stdev=8485.73 00:09:44.639 lat (usec): min=7707, max=66723, avg=16356.55, stdev=8500.16 00:09:44.639 clat percentiles (usec): 00:09:44.639 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11469], 00:09:44.639 | 30.00th=[11994], 40.00th=[12256], 50.00th=[13829], 60.00th=[15008], 00:09:44.639 | 70.00th=[17171], 80.00th=[19006], 90.00th=[25035], 95.00th=[28967], 00:09:44.639 | 99.00th=[62653], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:09:44.639 | 99.99th=[66847] 00:09:44.639 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:09:44.639 slat (usec): min=3, max=49085, avg=152.40, stdev=1296.50 00:09:44.639 clat (usec): min=6410, max=64873, avg=19327.04, stdev=10247.02 00:09:44.639 lat (usec): min=6429, max=64881, avg=19479.44, stdev=10343.53 00:09:44.639 clat percentiles (usec): 00:09:44.639 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[10945], 20.00th=[11338], 00:09:44.639 | 30.00th=[11994], 40.00th=[12780], 50.00th=[16581], 60.00th=[19268], 00:09:44.639 | 70.00th=[21365], 80.00th=[26346], 90.00th=[33424], 95.00th=[39584], 00:09:44.639 | 99.00th=[55837], 99.50th=[59507], 99.90th=[64750], 99.95th=[64750], 00:09:44.639 | 99.99th=[64750] 00:09:44.639 bw ( KiB/s): min=13240, max=15432, per=23.16%, avg=14336.00, stdev=1549.98, samples=2 00:09:44.639 iops : min= 3310, max= 3858, avg=3584.00, stdev=387.49, samples=2 00:09:44.639 lat (msec) : 4=0.01%, 10=5.37%, 20=70.44%, 50=22.36%, 100=1.82% 00:09:44.639 cpu : usr=5.67%, sys=7.65%, ctx=241, majf=0, minf=1 00:09:44.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:44.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.639 issued rwts: total=3460,3584,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:44.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.639 job1: (groupid=0, jobs=1): err= 0: pid=953913: Sat Jul 27 02:09:12 2024 00:09:44.639 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:09:44.639 slat (usec): min=2, max=12227, avg=103.97, stdev=734.37 00:09:44.639 clat (usec): min=5991, max=39700, avg=13501.73, stdev=4024.88 00:09:44.639 lat (usec): min=6010, max=39716, avg=13605.69, stdev=4084.88 00:09:44.639 clat percentiles (usec): 00:09:44.639 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10552], 00:09:44.639 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:09:44.639 | 70.00th=[14091], 80.00th=[15401], 90.00th=[17695], 95.00th=[19792], 00:09:44.639 | 99.00th=[31589], 99.50th=[32375], 99.90th=[39584], 99.95th=[39584], 00:09:44.639 | 99.99th=[39584] 00:09:44.639 write: IOPS=5003, BW=19.5MiB/s (20.5MB/s)(19.7MiB/1010msec); 0 zone resets 00:09:44.639 slat (usec): min=4, max=9605, avg=90.45, stdev=517.29 00:09:44.639 clat (usec): min=3481, max=39686, avg=12956.27, stdev=5782.48 00:09:44.639 lat (usec): min=3488, max=39694, avg=13046.72, stdev=5814.38 00:09:44.639 clat percentiles (usec): 00:09:44.639 | 1.00th=[ 5145], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 8291], 00:09:44.639 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11469], 60.00th=[13173], 00:09:44.639 | 70.00th=[14484], 80.00th=[16188], 90.00th=[21103], 95.00th=[24511], 00:09:44.639 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:09:44.639 | 99.99th=[39584] 00:09:44.639 bw ( KiB/s): min=19656, max=19760, per=31.84%, avg=19708.00, stdev=73.54, samples=2 00:09:44.639 iops : min= 4914, max= 4940, avg=4927.00, stdev=18.38, samples=2 00:09:44.639 lat (msec) : 4=0.06%, 10=23.95%, 20=66.16%, 50=9.83% 00:09:44.639 cpu : usr=7.33%, sys=10.21%, ctx=381, majf=0, minf=1 00:09:44.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:44.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.639 issued rwts: total=4608,5054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.639 job2: (groupid=0, jobs=1): err= 0: pid=953916: Sat Jul 27 02:09:12 2024 00:09:44.639 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:09:44.639 slat (usec): min=2, max=22515, avg=154.11, stdev=986.58 00:09:44.639 clat (usec): min=8794, max=95165, avg=19055.16, stdev=15091.54 00:09:44.639 lat (usec): min=8803, max=95186, avg=19209.27, stdev=15191.53 00:09:44.639 clat percentiles (usec): 00:09:44.639 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11469], 20.00th=[12125], 00:09:44.639 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[14353], 00:09:44.639 | 70.00th=[16712], 80.00th=[21103], 90.00th=[28443], 95.00th=[55313], 00:09:44.639 | 99.00th=[93848], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:09:44.639 | 99.99th=[94897] 00:09:44.639 write: IOPS=4062, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1003msec); 0 zone resets 00:09:44.639 slat (usec): min=3, max=10429, avg=101.99, stdev=554.91 00:09:44.639 clat (usec): min=2119, max=86615, avg=14336.50, stdev=8120.32 00:09:44.639 lat (usec): min=2125, max=86624, avg=14438.49, stdev=8133.07 00:09:44.639 clat percentiles (usec): 00:09:44.639 | 1.00th=[ 6128], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[11600], 00:09:44.640 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 
60.00th=[13566], 00:09:44.640 | 70.00th=[13960], 80.00th=[15139], 90.00th=[16188], 95.00th=[21890], 00:09:44.640 | 99.00th=[62129], 99.50th=[79168], 99.90th=[86508], 99.95th=[86508], 00:09:44.640 | 99.99th=[86508] 00:09:44.640 bw ( KiB/s): min=15320, max=16264, per=25.52%, avg=15792.00, stdev=667.51, samples=2 00:09:44.640 iops : min= 3830, max= 4066, avg=3948.00, stdev=166.88, samples=2 00:09:44.640 lat (msec) : 4=0.40%, 10=6.29%, 20=79.11%, 50=10.54%, 100=3.66% 00:09:44.640 cpu : usr=3.39%, sys=6.39%, ctx=359, majf=0, minf=1 00:09:44.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:44.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.640 issued rwts: total=3584,4075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.640 job3: (groupid=0, jobs=1): err= 0: pid=953917: Sat Jul 27 02:09:12 2024 00:09:44.640 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:44.640 slat (usec): min=2, max=12090, avg=170.56, stdev=901.84 00:09:44.640 clat (usec): min=10182, max=44413, avg=21150.24, stdev=8019.35 00:09:44.640 lat (usec): min=10189, max=44464, avg=21320.80, stdev=8088.39 00:09:44.640 clat percentiles (usec): 00:09:44.640 | 1.00th=[10683], 5.00th=[12125], 10.00th=[13566], 20.00th=[13698], 00:09:44.640 | 30.00th=[14615], 40.00th=[16319], 50.00th=[17433], 60.00th=[22676], 00:09:44.640 | 70.00th=[27657], 80.00th=[30016], 90.00th=[32637], 95.00th=[34341], 00:09:44.640 | 99.00th=[38011], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:09:44.640 | 99.99th=[44303] 00:09:44.640 write: IOPS=2896, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1006msec); 0 zone resets 00:09:44.640 slat (usec): min=3, max=33993, avg=187.12, stdev=1186.94 00:09:44.640 clat (usec): min=2086, max=69398, avg=25023.57, stdev=9915.57 00:09:44.640 lat (usec): min=8460, max=69483, avg=25210.69, stdev=9996.73 00:09:44.640 clat percentiles (usec): 00:09:44.640 | 1.00th=[ 9765], 5.00th=[12256], 10.00th=[13435], 20.00th=[14746], 00:09:44.640 | 30.00th=[17957], 40.00th=[20579], 50.00th=[26084], 60.00th=[28443], 00:09:44.640 | 70.00th=[29754], 80.00th=[32900], 90.00th=[35914], 95.00th=[41681], 00:09:44.640 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:09:44.640 | 99.99th=[69731] 00:09:44.640 bw ( KiB/s): min=10000, max=12288, per=18.01%, avg=11144.00, stdev=1617.86, samples=2 00:09:44.640 iops : min= 2500, max= 3072, avg=2786.00, stdev=404.47, samples=2 00:09:44.640 lat (msec) : 4=0.02%, 10=1.04%, 20=46.22%, 50=51.59%, 100=1.13% 00:09:44.640 cpu : usr=1.99%, sys=4.38%, ctx=269, majf=0, minf=1 00:09:44.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:44.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.640 issued rwts: total=2560,2914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.640 00:09:44.640 Run status group 0 (all jobs): 00:09:44.640 READ: bw=55.0MiB/s (57.6MB/s), 9.94MiB/s-17.8MiB/s (10.4MB/s-18.7MB/s), io=55.5MiB (58.2MB), run=1003-1010msec 00:09:44.640 WRITE: bw=60.4MiB/s (63.4MB/s), 11.3MiB/s-19.5MiB/s (11.9MB/s-20.5MB/s), io=61.0MiB (64.0MB), run=1003-1010msec 00:09:44.640 00:09:44.640 Disk stats (read/write): 00:09:44.640 nvme0n1: ios=2585/3025, merge=0/0, 
ticks=20431/28400, in_queue=48831, util=97.29% 00:09:44.640 nvme0n2: ios=4002/4096, merge=0/0, ticks=50102/49230, in_queue=99332, util=98.88% 00:09:44.640 nvme0n3: ios=3124/3152, merge=0/0, ticks=19740/12668, in_queue=32408, util=98.02% 00:09:44.640 nvme0n4: ios=2090/2401, merge=0/0, ticks=14123/19467, in_queue=33590, util=94.85% 00:09:44.640 02:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:44.640 02:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=954053 00:09:44.640 02:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:44.640 02:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:44.640 [global] 00:09:44.640 thread=1 00:09:44.640 invalidate=1 00:09:44.640 rw=read 00:09:44.640 time_based=1 00:09:44.640 runtime=10 00:09:44.640 ioengine=libaio 00:09:44.640 direct=1 00:09:44.640 bs=4096 00:09:44.640 iodepth=1 00:09:44.640 norandommap=1 00:09:44.640 numjobs=1 00:09:44.640 00:09:44.640 [job0] 00:09:44.640 filename=/dev/nvme0n1 00:09:44.640 [job1] 00:09:44.640 filename=/dev/nvme0n2 00:09:44.640 [job2] 00:09:44.640 filename=/dev/nvme0n3 00:09:44.640 [job3] 00:09:44.640 filename=/dev/nvme0n4 00:09:44.640 Could not set queue depth (nvme0n1) 00:09:44.640 Could not set queue depth (nvme0n2) 00:09:44.640 Could not set queue depth (nvme0n3) 00:09:44.640 Could not set queue depth (nvme0n4) 00:09:44.898 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.898 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.899 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.899 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.899 fio-3.35 00:09:44.899 Starting 4 threads 00:09:47.463 02:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:48.029 02:09:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:48.029 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9617408, buflen=4096 00:09:48.029 fio: pid=954246, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:48.029 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.029 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:48.029 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=319488, buflen=4096 00:09:48.029 fio: pid=954233, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:48.595 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.595 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:48.595 fio: io_u error on file /dev/nvme0n1: Remote I/O 
error: read offset=29270016, buflen=4096 00:09:48.595 fio: pid=954177, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:48.595 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=37089280, buflen=4096 00:09:48.595 fio: pid=954198, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:48.595 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.595 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:48.854 00:09:48.854 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=954177: Sat Jul 27 02:09:16 2024 00:09:48.854 read: IOPS=2063, BW=8252KiB/s (8450kB/s)(27.9MiB/3464msec) 00:09:48.854 slat (usec): min=4, max=15514, avg=16.70, stdev=244.41 00:09:48.854 clat (usec): min=295, max=41329, avg=461.52, stdev=1179.56 00:09:48.854 lat (usec): min=300, max=41338, avg=478.21, stdev=1205.78 00:09:48.854 clat percentiles (usec): 00:09:48.854 | 1.00th=[ 310], 5.00th=[ 338], 10.00th=[ 383], 20.00th=[ 404], 00:09:48.854 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 424], 60.00th=[ 433], 00:09:48.854 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 486], 00:09:48.854 | 99.00th=[ 562], 99.50th=[ 807], 99.90th=[ 2147], 99.95th=[41157], 00:09:48.854 | 99.99th=[41157] 00:09:48.854 bw ( KiB/s): min= 8528, max= 9736, per=44.58%, avg=8920.00, stdev=434.86, samples=6 00:09:48.854 iops : min= 2132, max= 2434, avg=2230.00, stdev=108.72, samples=6 00:09:48.854 lat (usec) : 500=96.73%, 750=2.74%, 1000=0.21% 00:09:48.854 lat (msec) : 2=0.20%, 4=0.01%, 10=0.01%, 50=0.08% 00:09:48.854 cpu : usr=1.41%, sys=4.13%, ctx=7149, majf=0, minf=1 00:09:48.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.854 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.854 issued rwts: total=7147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.854 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=954198: Sat Jul 27 02:09:16 2024 00:09:48.854 read: IOPS=2431, BW=9726KiB/s (9960kB/s)(35.4MiB/3724msec) 00:09:48.854 slat (usec): min=4, max=30380, avg=21.51, stdev=466.98 00:09:48.854 clat (usec): min=292, max=46205, avg=384.02, stdev=776.11 00:09:48.854 lat (usec): min=301, max=46217, avg=405.53, stdev=906.55 00:09:48.854 clat percentiles (usec): 00:09:48.854 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 338], 00:09:48.854 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:09:48.854 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 449], 00:09:48.854 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 603], 99.95th=[ 930], 00:09:48.854 | 99.99th=[46400] 00:09:48.854 bw ( KiB/s): min= 6225, max=11152, per=49.24%, avg=9852.71, stdev=1702.78, samples=7 00:09:48.854 iops : min= 1556, max= 2788, avg=2463.14, stdev=425.78, samples=7 00:09:48.854 lat (usec) : 500=98.29%, 750=1.65%, 1000=0.01% 00:09:48.854 lat (msec) : 2=0.01%, 50=0.03% 00:09:48.854 cpu : usr=2.01%, sys=4.06%, ctx=9063, majf=0, minf=1 00:09:48.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.854 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.854 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.854 issued rwts: total=9056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.854 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=954233: Sat Jul 27 02:09:16 2024 00:09:48.854 read: IOPS=24, BW=98.0KiB/s (100kB/s)(312KiB/3184msec) 00:09:48.854 slat (nsec): min=12651, max=35384, avg=19045.66, stdev=5832.33 00:09:48.854 clat (usec): min=681, max=43991, avg=40512.59, stdev=4582.82 00:09:48.854 lat (usec): min=706, max=44008, avg=40531.69, stdev=4582.10 00:09:48.854 clat percentiles (usec): 00:09:48.854 | 1.00th=[ 685], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:48.854 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:48.854 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:48.854 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:09:48.854 | 99.99th=[43779] 00:09:48.854 bw ( KiB/s): min= 96, max= 104, per=0.49%, avg=98.67, stdev= 4.13, samples=6 00:09:48.854 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:09:48.854 lat (usec) : 750=1.27% 00:09:48.854 lat (msec) : 50=97.47% 00:09:48.854 cpu : usr=0.09%, sys=0.00%, ctx=80, majf=0, minf=1 00:09:48.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.854 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.854 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.854 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=954246: Sat Jul 27 02:09:16 2024 00:09:48.854 read: IOPS=809, BW=3235KiB/s (3313kB/s)(9392KiB/2903msec) 00:09:48.854 slat (nsec): min=6144, max=82615, avg=15248.81, stdev=7089.17 00:09:48.854 clat (usec): min=331, max=41951, avg=1206.62, stdev=5501.43 00:09:48.854 lat (usec): min=339, max=41989, avg=1221.86, stdev=5502.44 00:09:48.854 clat percentiles (usec): 00:09:48.854 | 1.00th=[ 343], 5.00th=[ 363], 10.00th=[ 383], 20.00th=[ 404], 00:09:48.854 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 453], 00:09:48.854 | 70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 523], 95.00th=[ 570], 00:09:48.854 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:09:48.854 | 99.99th=[42206] 00:09:48.854 bw ( KiB/s): min= 120, max= 8720, per=18.69%, avg=3739.20, stdev=4220.08, samples=5 00:09:48.854 iops : min= 30, max= 2180, avg=934.80, stdev=1055.02, samples=5 00:09:48.854 lat (usec) : 500=83.23%, 750=14.64%, 1000=0.13% 00:09:48.855 lat (msec) : 2=0.09%, 50=1.87% 00:09:48.855 cpu : usr=0.59%, sys=2.00%, ctx=2349, majf=0, minf=1 00:09:48.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.855 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.855 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.855 00:09:48.855 Run status group 0 (all jobs): 00:09:48.855 READ: bw=19.5MiB/s (20.5MB/s), 98.0KiB/s-9726KiB/s (100kB/s-9960kB/s), io=72.8MiB (76.3MB), 
run=2903-3724msec 00:09:48.855 00:09:48.855 Disk stats (read/write): 00:09:48.855 nvme0n1: ios=7143/0, merge=0/0, ticks=3101/0, in_queue=3101, util=94.99% 00:09:48.855 nvme0n2: ios=8769/0, merge=0/0, ticks=3326/0, in_queue=3326, util=94.16% 00:09:48.855 nvme0n3: ios=76/0, merge=0/0, ticks=3080/0, in_queue=3080, util=96.72% 00:09:48.855 nvme0n4: ios=2347/0, merge=0/0, ticks=2784/0, in_queue=2784, util=96.71% 00:09:48.855 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.855 02:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:49.113 02:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.113 02:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:49.371 02:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.371 02:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:49.629 02:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.629 02:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:49.887 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:49.887 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 954053 00:09:49.887 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:49.887 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:50.146 nvmf hotplug test: fio failed as expected 00:09:50.146 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.404 rmmod nvme_tcp 00:09:50.404 rmmod nvme_fabrics 00:09:50.404 rmmod nvme_keyring 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.404 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 951519 ']' 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 951519 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 951519 ']' 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 951519 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 951519 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 951519' 00:09:50.405 killing process with pid 951519 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 951519 00:09:50.405 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 951519 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.664 02:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.199 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.199 00:09:53.199 real 0m23.450s 00:09:53.199 user 1m22.001s 00:09:53.199 sys 0m7.008s 00:09:53.199 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.199 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.199 ************************************ 00:09:53.199 END TEST nvmf_fio_target 00:09:53.199 ************************************ 00:09:53.199 02:09:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.199 02:09:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:53.199 02:09:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.199 02:09:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.199 ************************************ 00:09:53.200 START TEST nvmf_bdevio 00:09:53.200 ************************************ 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.200 * Looking for test storage... 
00:09:53.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.200 02:09:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.575 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.576 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.834 02:09:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:09:54.834 00:09:54.834 --- 10.0.0.2 ping statistics --- 00:09:54.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.834 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:09:54.834 00:09:54.834 --- 10.0.0.1 ping statistics --- 00:09:54.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.834 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=956776 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 956776 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 956776 ']' 00:09:54.834 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.835 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.835 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.835 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.835 02:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.835 [2024-07-27 02:09:22.939250] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:09:54.835 [2024-07-27 02:09:22.939325] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.835 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.835 [2024-07-27 02:09:22.978461] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:55.093 [2024-07-27 02:09:23.011663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.093 [2024-07-27 02:09:23.106641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.093 [2024-07-27 02:09:23.106705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.093 [2024-07-27 02:09:23.106731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.093 [2024-07-27 02:09:23.106744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.093 [2024-07-27 02:09:23.106757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.093 [2024-07-27 02:09:23.106850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:55.093 [2024-07-27 02:09:23.106905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.093 [2024-07-27 02:09:23.106956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:55.093 [2024-07-27 02:09:23.106959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.093 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.093 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:55.093 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:55.093 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.093 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 [2024-07-27 02:09:23.267583] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 Malloc0 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.351 02:09:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 [2024-07-27 02:09:23.321093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:55.351 { 00:09:55.351 "params": { 00:09:55.351 "name": "Nvme$subsystem", 00:09:55.351 "trtype": "$TEST_TRANSPORT", 00:09:55.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.351 "adrfam": "ipv4", 00:09:55.351 "trsvcid": "$NVMF_PORT", 00:09:55.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.351 "hdgst": ${hdgst:-false}, 00:09:55.351 "ddgst": ${ddgst:-false} 00:09:55.351 }, 00:09:55.351 "method": "bdev_nvme_attach_controller" 00:09:55.351 } 00:09:55.351 EOF 00:09:55.351 )") 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:55.351 02:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:55.351 "params": { 00:09:55.351 "name": "Nvme1", 00:09:55.351 "trtype": "tcp", 00:09:55.351 "traddr": "10.0.0.2", 00:09:55.351 "adrfam": "ipv4", 00:09:55.351 "trsvcid": "4420", 00:09:55.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.351 "hdgst": false, 00:09:55.351 "ddgst": false 00:09:55.351 }, 00:09:55.351 "method": "bdev_nvme_attach_controller" 00:09:55.351 }' 00:09:55.351 [2024-07-27 02:09:23.368220] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:09:55.351 [2024-07-27 02:09:23.368301] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956920 ] 00:09:55.351 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.351 [2024-07-27 02:09:23.402889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:55.351 [2024-07-27 02:09:23.432696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.608 [2024-07-27 02:09:23.521827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.608 [2024-07-27 02:09:23.521876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.608 [2024-07-27 02:09:23.521879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.608 I/O targets: 00:09:55.608 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:55.608 00:09:55.608 00:09:55.608 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.608 http://cunit.sourceforge.net/ 00:09:55.608 00:09:55.608 00:09:55.608 Suite: bdevio tests on: Nvme1n1 00:09:55.865 Test: blockdev write read block ...passed 00:09:55.865 Test: blockdev write zeroes read block ...passed 00:09:55.865 Test: blockdev write zeroes read no split ...passed 00:09:55.865 Test: blockdev write zeroes read split ...passed 00:09:55.865 Test: blockdev write zeroes read split partial ...passed 00:09:55.865 Test: blockdev reset ...[2024-07-27 02:09:23.940949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:55.865 [2024-07-27 02:09:23.941069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2015940 (9): Bad file descriptor 00:09:55.865 [2024-07-27 02:09:23.993221] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:55.865 passed 00:09:55.865 Test: blockdev write read 8 blocks ...passed 00:09:55.865 Test: blockdev write read size > 128k ...passed 00:09:55.865 Test: blockdev write read invalid size ...passed 00:09:56.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:56.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:56.122 Test: blockdev write read max offset ...passed 00:09:56.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:56.122 Test: blockdev writev readv 8 blocks ...passed 00:09:56.122 Test: blockdev writev readv 30 x 1block ...passed 00:09:56.122 Test: blockdev writev readv block ...passed 00:09:56.122 Test: blockdev writev readv size > 128k ...passed 00:09:56.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:56.122 Test: blockdev comparev and writev ...[2024-07-27 02:09:24.250539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.250573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:56.122 [2024-07-27 02:09:24.250598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.250614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:56.122 [2024-07-27 02:09:24.251001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.251025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:56.122 [2024-07-27 02:09:24.251047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.251073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:56.122 [2024-07-27 02:09:24.251455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.251478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:56.122 [2024-07-27 02:09:24.251500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.251515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:56.122 [2024-07-27 02:09:24.251899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.251922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:56.122 [2024-07-27 02:09:24.251943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.122 [2024-07-27 02:09:24.251967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:56.380 passed 00:09:56.380 Test: blockdev nvme passthru rw ...passed 00:09:56.380 Test: blockdev nvme passthru vendor specific ...[2024-07-27 02:09:24.335455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.380 [2024-07-27 02:09:24.335482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:56.380 [2024-07-27 02:09:24.335693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.380 [2024-07-27 02:09:24.335716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:56.380 [2024-07-27 02:09:24.335921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.380 [2024-07-27 02:09:24.335944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:56.380 [2024-07-27 02:09:24.336153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.380 [2024-07-27 02:09:24.336176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:56.380 passed 00:09:56.380 Test: blockdev nvme admin passthru ...passed 00:09:56.380 Test: blockdev copy ...passed 00:09:56.380 00:09:56.380 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.380 suites 1 1 n/a 0 0 00:09:56.380 tests 23 23 23 0 0 00:09:56.380 asserts 152 152 152 0 n/a 00:09:56.380 00:09:56.381 Elapsed time = 1.319 seconds 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.638 rmmod nvme_tcp 00:09:56.638 rmmod nvme_fabrics 00:09:56.638 rmmod nvme_keyring 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
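The run summary above reports all 23 bdevio tests passing. The COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) completions logged during the comparev-and-writev test appear to be the deliberately provoked miscompare path rather than failures: 02/85 decodes to SCT 0x2 (media and data integrity errors) / SC 0x85 (Compare Failure), and 00/09 to generic SC 0x09 (command aborted due to failed fused command). The teardown traced above and just below amounts to this standalone sequence, sketched with the pid and NQN from this run:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # initiator-side kernel modules, removed in dependency order
    modprobe -v -r nvme-fabrics
    modprobe -v -r nvme-keyring
    kill 956776                    # the nvmf target process started for this test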
00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 956776 ']' 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 956776 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 956776 ']' 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 956776 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 956776 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 956776' 00:09:56.638 killing process with pid 956776 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 956776 00:09:56.638 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 956776 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.897 02:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.430 02:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.430 00:09:59.430 real 0m6.159s 00:09:59.430 user 0m10.154s 00:09:59.430 sys 0m2.018s 00:09:59.430 02:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.430 02:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.430 ************************************ 00:09:59.430 END TEST nvmf_bdevio 00:09:59.430 ************************************ 00:09:59.430 02:09:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:59.430 00:09:59.430 real 3m49.481s 00:09:59.430 user 9m46.540s 00:09:59.430 sys 1m10.305s 00:09:59.430 02:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.430 02:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.430 ************************************ 00:09:59.430 END TEST nvmf_target_core 00:09:59.430 ************************************ 00:09:59.430 02:09:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.430 02:09:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.430 02:09:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.430 02:09:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.430 ************************************ 00:09:59.430 START TEST nvmf_target_extra 00:09:59.430 ************************************ 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.431 * Looking for test storage... 00:09:59.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.431 ************************************ 00:09:59.431 START TEST nvmf_example 00:09:59.431 ************************************ 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.431 * Looking for test storage... 00:09:59.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.431 02:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.431 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.432 02:09:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.335 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:01.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:01.336 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:01.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.336 02:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:01.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:01.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:01.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:10:01.336 00:10:01.336 --- 10.0.0.2 ping statistics --- 00:10:01.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.336 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:10:01.336 00:10:01.336 --- 10.0.0.1 ping statistics --- 00:10:01.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.336 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=959045 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 959045 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 959045 ']' 00:10:01.336 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.337 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.337 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.337 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.337 02:09:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.337 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.270 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:02.529 02:09:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:02.529 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.525 Initializing NVMe Controllers 00:10:12.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:12.526 Initialization complete. Launching workers. 00:10:12.526 ======================================================== 00:10:12.526 Latency(us) 00:10:12.526 Device Information : IOPS MiB/s Average min max 00:10:12.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13975.98 54.59 4578.94 902.97 23987.37 00:10:12.526 ======================================================== 00:10:12.526 Total : 13975.98 54.59 4578.94 902.97 23987.37 00:10:12.526 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.526 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.526 rmmod nvme_tcp 00:10:12.783 rmmod nvme_fabrics 00:10:12.783 rmmod nvme_keyring 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 959045 ']' 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 959045 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 959045 ']' 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 959045 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 959045 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 959045' 00:10:12.783 killing process with pid 959045 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 959045 00:10:12.783 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 959045 00:10:13.042 nvmf threads initialize successfully 00:10:13.042 bdev subsystem init successfully 00:10:13.042 created a nvmf target service 00:10:13.042 create targets's poll groups done 00:10:13.042 all subsystems of target started 00:10:13.042 nvmf target is running 00:10:13.042 all subsystems of target stopped 00:10:13.042 destroy targets's poll groups done 00:10:13.042 destroyed the nvmf target service 00:10:13.042 bdev subsystem finish successfully 00:10:13.042 nvmf threads destroy successfully 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.042 02:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.944 00:10:14.944 real 0m15.914s 00:10:14.944 user 0m44.948s 00:10:14.944 sys 0m3.377s 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.944 ************************************ 00:10:14.944 END TEST nvmf_example 00:10:14.944 ************************************ 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.944 02:09:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
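The nvmf_example block above starts the example target (build/examples/nvmf) inside the cvl_0_0_ns_spdk network namespace, provisions it over RPC, and drives it with spdk_nvme_perf. A sketch of the benchmark invocation, using the binary paths and parameters from this run (-q 64: queue depth, -o 4096: 4 KiB I/Os, -w randrw -M 30: random mixed I/O at 30% reads, -t 10: ten-second run):

    ip netns exec cvl_0_0_ns_spdk build/examples/nvmf -i 0 -g 10000 -m 0xF &
    # ...create the tcp transport, subsystem, namespace and listener over rpc.py as earlier...
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The summary printed above is internally consistent: 13975.98 IOPS at 4096-byte I/Os works out to 54.59 MiB/s, with an average latency of 4578.94 us at queue depth 64.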
00:10:14.944 ************************************ 00:10:14.944 START TEST nvmf_filesystem 00:10:14.944 ************************************ 00:10:14.945 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:15.206 * Looking for test storage... 00:10:15.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:15.206 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:15.207 02:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:15.207 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:15.208 #define SPDK_CONFIG_H 00:10:15.208 #define SPDK_CONFIG_APPS 1 00:10:15.208 #define SPDK_CONFIG_ARCH native 00:10:15.208 #undef SPDK_CONFIG_ASAN 00:10:15.208 #undef SPDK_CONFIG_AVAHI 00:10:15.208 #undef SPDK_CONFIG_CET 00:10:15.208 #define SPDK_CONFIG_COVERAGE 1 00:10:15.208 #define SPDK_CONFIG_CROSS_PREFIX 00:10:15.208 #undef SPDK_CONFIG_CRYPTO 00:10:15.208 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:15.208 #undef SPDK_CONFIG_CUSTOMOCF 00:10:15.208 #undef SPDK_CONFIG_DAOS 00:10:15.208 #define SPDK_CONFIG_DAOS_DIR 00:10:15.208 #define SPDK_CONFIG_DEBUG 1 00:10:15.208 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:15.208 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:15.208 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:15.208 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:15.208 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:15.208 #undef SPDK_CONFIG_DPDK_UADK 00:10:15.208 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:15.208 #define SPDK_CONFIG_EXAMPLES 1 00:10:15.208 #undef SPDK_CONFIG_FC 00:10:15.208 #define SPDK_CONFIG_FC_PATH 00:10:15.208 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:15.208 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:15.208 #undef SPDK_CONFIG_FUSE 00:10:15.208 #undef SPDK_CONFIG_FUZZER 00:10:15.208 #define SPDK_CONFIG_FUZZER_LIB 00:10:15.208 #undef SPDK_CONFIG_GOLANG 00:10:15.208 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:15.208 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:15.208 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:15.208 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:15.208 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:15.208 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:15.208 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:15.208 #define SPDK_CONFIG_IDXD 1 00:10:15.208 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:15.208 #undef SPDK_CONFIG_IPSEC_MB 00:10:15.208 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:15.208 #define SPDK_CONFIG_ISAL 1 00:10:15.208 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:15.208 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:15.208 #define SPDK_CONFIG_LIBDIR 00:10:15.208 #undef SPDK_CONFIG_LTO 00:10:15.208 #define SPDK_CONFIG_MAX_LCORES 128 00:10:15.208 #define SPDK_CONFIG_NVME_CUSE 1 00:10:15.208 #undef SPDK_CONFIG_OCF 00:10:15.208 #define SPDK_CONFIG_OCF_PATH 00:10:15.208 #define SPDK_CONFIG_OPENSSL_PATH 00:10:15.208 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:15.208 #define SPDK_CONFIG_PGO_DIR 00:10:15.208 #undef SPDK_CONFIG_PGO_USE 00:10:15.208 #define SPDK_CONFIG_PREFIX /usr/local 00:10:15.208 #undef SPDK_CONFIG_RAID5F 00:10:15.208 #undef SPDK_CONFIG_RBD 00:10:15.208 #define SPDK_CONFIG_RDMA 1 00:10:15.208 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:15.208 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:15.208 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:15.208 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:15.208 #define SPDK_CONFIG_SHARED 1 00:10:15.208 #undef SPDK_CONFIG_SMA 00:10:15.208 #define SPDK_CONFIG_TESTS 1 00:10:15.208 #undef SPDK_CONFIG_TSAN 00:10:15.208 #define SPDK_CONFIG_UBLK 1 00:10:15.208 #define SPDK_CONFIG_UBSAN 1 00:10:15.208 #undef SPDK_CONFIG_UNIT_TESTS 00:10:15.208 #undef SPDK_CONFIG_URING 00:10:15.208 #define SPDK_CONFIG_URING_PATH 00:10:15.208 #undef 
SPDK_CONFIG_URING_ZNS 00:10:15.208 #undef SPDK_CONFIG_USDT 00:10:15.208 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:15.208 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:15.208 #define SPDK_CONFIG_VFIO_USER 1 00:10:15.208 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:15.208 #define SPDK_CONFIG_VHOST 1 00:10:15.208 #define SPDK_CONFIG_VIRTIO 1 00:10:15.208 #undef SPDK_CONFIG_VTUNE 00:10:15.208 #define SPDK_CONFIG_VTUNE_DIR 00:10:15.208 #define SPDK_CONFIG_WERROR 1 00:10:15.208 #define SPDK_CONFIG_WPDK_DIR 00:10:15.208 #undef SPDK_CONFIG_XNVME 00:10:15.208 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 
00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:15.208 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@84 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # 
: 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : main 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@144 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:15.209 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:15.210 02:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:10:15.210 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 960752 ]] 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 960752 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.XVAlXL 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XVAlXL/tests/target /tmp/spdk.XVAlXL 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=919711744 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4364718080 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@364 -- # avails["$mount"]=54013407232 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=7981305856 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935175168 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996185088 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1171456 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
read -r source fs size use avail _ mount 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:15.211 * Looking for test storage... 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:15.211 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=54013407232 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=10195898368 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@1689 -- # xtrace_fd 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.212 02:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:15.212 02:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.746 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:17.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:17.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:17.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:17.747 Found net devices under 0000:0a:00.1: cvl_0_1 
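The trace above is the harness's NIC discovery: for each supported PCI ID it echoes the match, then globs that function's net/ directory in sysfs to learn the kernel interface name (cvl_0_0 and cvl_0_1 on this box). Below is a minimal standalone sketch of the same sysfs walk, assuming only the single E810 device ID 0x159b seen in this run; the real common.sh matches a larger Intel/Mellanox ID table, as the e810/x722/mlx arrays above show.

#!/usr/bin/env bash
# Sketch: resolve supported NICs to kernel net device names via sysfs.
# 0x8086/0x159b (Intel E810) is the ID seen in this run; the harness
# also matches other Intel and Mellanox IDs.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
  [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == 0x159b ]] || continue
  for net_dev in "$pci"/net/*; do
    [[ -e $net_dev ]] || continue        # no netdev bound to this function
    echo "Found ${net_dev##*/} under ${pci##*/}"
  done
done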
00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:17.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:10:17.747 00:10:17.747 --- 10.0.0.2 ping statistics --- 00:10:17.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.747 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:17.747 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:10:17.748 00:10:17.748 --- 10.0.0.1 ping statistics --- 00:10:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.748 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.748 ************************************ 00:10:17.748 START TEST nvmf_filesystem_no_in_capsule 00:10:17.748 ************************************ 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=962375 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 962375 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 962375 ']' 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.748 [2024-07-27 02:09:45.628016] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:10:17.748 [2024-07-27 02:09:45.628093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.748 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.748 [2024-07-27 02:09:45.666747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:17.748 [2024-07-27 02:09:45.693826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.748 [2024-07-27 02:09:45.785017] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.748 [2024-07-27 02:09:45.785094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.748 [2024-07-27 02:09:45.785109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.748 [2024-07-27 02:09:45.785120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.748 [2024-07-27 02:09:45.785130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
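At this point the target is running inside the cvl_0_0_ns_spdk namespace set up above (one port moved in and addressed as 10.0.0.2, its peer left in the root namespace as 10.0.0.1, with an iptables ACCEPT rule for port 4420), and waitforlisten 962375 blocks until the target's JSON-RPC socket answers. A hedged sketch of such a readiness loop follows; the direct scripts/rpc.py call (run from the SPDK repo root) and the retry budget are assumptions, since the harness's actual helper lives in autotest_common.sh.

# Sketch: poll the target's RPC socket until it responds, which is
# conceptually what waitforlisten does. Retry count is an assumption.
rpc_addr=/var/tmp/spdk.sock
pid=962375                               # nvmfpid from this run
for ((i = 0; i < 100; i++)); do
  kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
  if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
    echo "target is listening on $rpc_addr"
    break
  fi
  sleep 0.1
done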
00:10:17.748 [2024-07-27 02:09:45.785210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.748 [2024-07-27 02:09:45.785288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.748 [2024-07-27 02:09:45.785348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.748 [2024-07-27 02:09:45.785350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.748 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.007 [2024-07-27 02:09:45.935510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.007 02:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.007 Malloc1 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.007 02:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.007 [2024-07-27 02:09:46.117994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:18.007 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:18.008 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:18.008 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.008 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.008 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.008 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:18.008 { 00:10:18.008 "name": "Malloc1", 00:10:18.008 "aliases": [ 00:10:18.008 "8f74c3ce-d928-45ee-ae45-a3dfc105ab29" 00:10:18.008 ], 00:10:18.008 "product_name": "Malloc disk", 00:10:18.008 "block_size": 512, 00:10:18.008 "num_blocks": 1048576, 00:10:18.008 "uuid": "8f74c3ce-d928-45ee-ae45-a3dfc105ab29", 00:10:18.008 "assigned_rate_limits": { 00:10:18.008 "rw_ios_per_sec": 0, 00:10:18.008 "rw_mbytes_per_sec": 0, 00:10:18.008 "r_mbytes_per_sec": 0, 00:10:18.008 "w_mbytes_per_sec": 0 00:10:18.008 }, 00:10:18.008 "claimed": true, 00:10:18.008 "claim_type": "exclusive_write", 00:10:18.008 "zoned": false, 00:10:18.008 "supported_io_types": { 00:10:18.008 "read": 
true, 00:10:18.008 "write": true, 00:10:18.008 "unmap": true, 00:10:18.008 "flush": true, 00:10:18.008 "reset": true, 00:10:18.008 "nvme_admin": false, 00:10:18.008 "nvme_io": false, 00:10:18.008 "nvme_io_md": false, 00:10:18.008 "write_zeroes": true, 00:10:18.008 "zcopy": true, 00:10:18.008 "get_zone_info": false, 00:10:18.008 "zone_management": false, 00:10:18.008 "zone_append": false, 00:10:18.008 "compare": false, 00:10:18.008 "compare_and_write": false, 00:10:18.008 "abort": true, 00:10:18.008 "seek_hole": false, 00:10:18.008 "seek_data": false, 00:10:18.008 "copy": true, 00:10:18.008 "nvme_iov_md": false 00:10:18.008 }, 00:10:18.008 "memory_domains": [ 00:10:18.008 { 00:10:18.008 "dma_device_id": "system", 00:10:18.008 "dma_device_type": 1 00:10:18.008 }, 00:10:18.008 { 00:10:18.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.008 "dma_device_type": 2 00:10:18.008 } 00:10:18.008 ], 00:10:18.008 "driver_specific": {} 00:10:18.008 } 00:10:18.008 ]' 00:10:18.008 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:18.266 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:18.266 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:18.266 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:18.266 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:18.266 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:18.266 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:18.266 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.831 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.831 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.831 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.831 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.831 02:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:21.358 02:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:21.358 02:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:21.924 02:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.297 ************************************ 00:10:23.297 START TEST filesystem_ext4 00:10:23.297 ************************************ 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
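With the namespace-exported disk partitioned (parted ... mkpart SPDK_TEST 0% 100% above), each filesystem_* subtest that follows runs the same cycle against /dev/nvme0n1p1: make the filesystem, mount it, create and sync a file, remove it, and unmount. A condensed sketch of that cycle; the device path, mount point, file name, and force flags are taken from the trace, while error handling is simplified away.

# Sketch of the per-filesystem cycle traced below for ext4, btrfs and xfs.
dev=/dev/nvme0n1p1
mnt=/mnt/device
for fstype in ext4 btrfs xfs; do
  if [[ $fstype == ext4 ]]; then
    mkfs.ext4 -F "$dev"                  # ext4 forces with -F
  else
    "mkfs.$fstype" -f "$dev"             # btrfs and xfs force with -f
  fi
  mount "$dev" "$mnt"
  touch "$mnt/aaa" && sync               # create a file and flush it
  rm "$mnt/aaa" && sync
  umount "$mnt"
done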
00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:23.297 02:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:23.297 mke2fs 1.46.5 (30-Dec-2021) 00:10:23.297 Discarding device blocks: 0/522240 done 00:10:23.297 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:23.297 Filesystem UUID: d3a7e2fa-5c69-44b0-a082-c38ec05c29b5 00:10:23.297 Superblock backups stored on blocks: 00:10:23.298 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:23.298 00:10:23.298 Allocating group tables: 0/64 done 00:10:23.298 Writing inode tables: 0/64 done 00:10:23.298 Creating journal (8192 blocks): done 00:10:24.378 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:10:24.378 00:10:24.378 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:24.378 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.635 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.635 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:24.635 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.635 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:24.635 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:24.635 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.893 
02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 962375 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.893 00:10:24.893 real 0m1.756s 00:10:24.893 user 0m0.027s 00:10:24.893 sys 0m0.052s 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:24.893 ************************************ 00:10:24.893 END TEST filesystem_ext4 00:10:24.893 ************************************ 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.893 ************************************ 00:10:24.893 START TEST filesystem_btrfs 00:10:24.893 ************************************ 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:24.893 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.894 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:24.894 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:24.894 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:24.894 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:24.894 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:24.894 02:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:24.894 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:24.894 02:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:25.151 btrfs-progs v6.6.2 00:10:25.151 See https://btrfs.readthedocs.io for more information. 00:10:25.151 00:10:25.151 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:25.152 NOTE: several default settings have changed in version 5.15, please make sure 00:10:25.152 this does not affect your deployments: 00:10:25.152 - DUP for metadata (-m dup) 00:10:25.152 - enabled no-holes (-O no-holes) 00:10:25.152 - enabled free-space-tree (-R free-space-tree) 00:10:25.152 00:10:25.152 Label: (null) 00:10:25.152 UUID: a3fc46be-a045-40b5-9324-7c125eeb2552 00:10:25.152 Node size: 16384 00:10:25.152 Sector size: 4096 00:10:25.152 Filesystem size: 510.00MiB 00:10:25.152 Block group profiles: 00:10:25.152 Data: single 8.00MiB 00:10:25.152 Metadata: DUP 32.00MiB 00:10:25.152 System: DUP 8.00MiB 00:10:25.152 SSD detected: yes 00:10:25.152 Zoned device: no 00:10:25.152 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:25.152 Runtime features: free-space-tree 00:10:25.152 Checksum: crc32c 00:10:25.152 Number of devices: 1 00:10:25.152 Devices: 00:10:25.152 ID SIZE PATH 00:10:25.152 1 510.00MiB /dev/nvme0n1p1 00:10:25.152 00:10:25.152 02:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:25.152 02:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.084 02:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.084 02:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:26.084 02:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.084 02:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 962375 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.084 00:10:26.084 real 0m1.190s 00:10:26.084 user 0m0.022s 00:10:26.084 sys 0m0.115s 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:26.084 ************************************ 00:10:26.084 END TEST filesystem_btrfs 00:10:26.084 ************************************ 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.084 ************************************ 00:10:26.084 START TEST filesystem_xfs 00:10:26.084 ************************************ 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:26.084 02:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:26.084 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:26.084 = sectsz=512 attr=2, projid32bit=1 00:10:26.084 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:26.084 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:10:26.084 data = bsize=4096 blocks=130560, imaxpct=25 00:10:26.084 = sunit=0 swidth=0 blks 00:10:26.084 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:26.084 log =internal log bsize=4096 blocks=16384, version=2 00:10:26.084 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:26.084 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:27.457 Discarding blocks...Done. 00:10:27.457 02:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:27.458 02:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:29.406 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:29.406 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:29.406 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:29.406 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:29.406 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:29.406 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:29.663 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 962375 00:10:29.663 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:29.663 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:29.663 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:29.663 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:29.663 00:10:29.663 real 0m3.487s 00:10:29.663 user 0m0.012s 00:10:29.663 sys 0m0.062s 00:10:29.663 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.664 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:29.664 ************************************ 00:10:29.664 END TEST filesystem_xfs 00:10:29.664 ************************************ 00:10:29.664 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
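Teardown mirrors setup in reverse: drop the test partition under flock (serializing against concurrent device scans), sync, disconnect the initiator, then delete the subsystem and stop the target. Condensed below, with a direct rpc.py call standing in for the harness's rpc_cmd wrapper.

# Teardown order, as traced above.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # partition 1 = SPDK_TEST
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 962375                                      # killprocess $nvmfpid
wait 962375                                      # valid: target is a child of this shell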
00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 962375 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 962375 ']' 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 962375 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.922 02:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 962375 00:10:29.922 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.922 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.922 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 962375' 00:10:29.922 killing process with pid 962375 00:10:29.922 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 962375 00:10:29.922 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 962375 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:30.489 00:10:30.489 real 0m12.885s 00:10:30.489 user 0m49.487s 00:10:30.489 sys 0m1.936s 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.489 ************************************ 00:10:30.489 END TEST nvmf_filesystem_no_in_capsule 00:10:30.489 ************************************ 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.489 ************************************ 00:10:30.489 START TEST nvmf_filesystem_in_capsule 00:10:30.489 ************************************ 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=964181 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 964181 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 964181 ']' 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
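The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for the target to answer on its UNIX RPC socket. A minimal bash sketch of that start-and-wait pattern, assuming rpc.py under scripts/ and a 100-iteration retry budget (the real waitforlisten in autotest_common.sh is more elaborate):

  # Start the target in the test namespace, then poll until RPC is up.
  # Paths and the retry budget here are illustrative assumptions.
  rpc_sock=/var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods fails until the target is accepting RPCs on the socket
      if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.5
  done

Once the loop exits, the rpc_cmd calls that follow in the trace (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, ...) can be issued against the same socket.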
00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.489 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.489 [2024-07-27 02:09:58.570966] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:10:30.489 [2024-07-27 02:09:58.571049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.489 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.490 [2024-07-27 02:09:58.612440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:30.490 [2024-07-27 02:09:58.638878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.748 [2024-07-27 02:09:58.727489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.748 [2024-07-27 02:09:58.727541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.748 [2024-07-27 02:09:58.727569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.748 [2024-07-27 02:09:58.727581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.748 [2024-07-27 02:09:58.727591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.748 [2024-07-27 02:09:58.727647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.748 [2024-07-27 02:09:58.727704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.748 [2024-07-27 02:09:58.727770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.748 [2024-07-27 02:09:58.727772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:30.748 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:30.749 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.749 02:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.749 [2024-07-27 02:09:58.869257] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.749 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.749 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:30.749 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.749 02:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 Malloc1 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.007 [2024-07-27 02:09:59.048030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1380 -- # local bs 00:10:31.007 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:31.008 { 00:10:31.008 "name": "Malloc1", 00:10:31.008 "aliases": [ 00:10:31.008 "15b6874d-0b3a-4b00-8ae6-f5e9613d33dd" 00:10:31.008 ], 00:10:31.008 "product_name": "Malloc disk", 00:10:31.008 "block_size": 512, 00:10:31.008 "num_blocks": 1048576, 00:10:31.008 "uuid": "15b6874d-0b3a-4b00-8ae6-f5e9613d33dd", 00:10:31.008 "assigned_rate_limits": { 00:10:31.008 "rw_ios_per_sec": 0, 00:10:31.008 "rw_mbytes_per_sec": 0, 00:10:31.008 "r_mbytes_per_sec": 0, 00:10:31.008 "w_mbytes_per_sec": 0 00:10:31.008 }, 00:10:31.008 "claimed": true, 00:10:31.008 "claim_type": "exclusive_write", 00:10:31.008 "zoned": false, 00:10:31.008 "supported_io_types": { 00:10:31.008 "read": true, 00:10:31.008 "write": true, 00:10:31.008 "unmap": true, 00:10:31.008 "flush": true, 00:10:31.008 "reset": true, 00:10:31.008 "nvme_admin": false, 00:10:31.008 "nvme_io": false, 00:10:31.008 "nvme_io_md": false, 00:10:31.008 "write_zeroes": true, 00:10:31.008 "zcopy": true, 00:10:31.008 "get_zone_info": false, 00:10:31.008 "zone_management": false, 00:10:31.008 "zone_append": false, 00:10:31.008 "compare": false, 00:10:31.008 "compare_and_write": false, 00:10:31.008 "abort": true, 00:10:31.008 "seek_hole": false, 00:10:31.008 "seek_data": false, 00:10:31.008 "copy": true, 00:10:31.008 "nvme_iov_md": false 00:10:31.008 }, 00:10:31.008 "memory_domains": [ 00:10:31.008 { 00:10:31.008 "dma_device_id": "system", 00:10:31.008 "dma_device_type": 1 00:10:31.008 }, 00:10:31.008 { 00:10:31.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.008 "dma_device_type": 2 00:10:31.008 } 00:10:31.008 ], 00:10:31.008 "driver_specific": {} 00:10:31.008 } 00:10:31.008 ]' 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 
-- # malloc_size=536870912 00:10:31.008 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.941 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.941 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:31.941 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.941 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:31.941 02:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:33.839 02:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:33.839 02:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:34.097 02:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:34.661 02:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.032 ************************************ 00:10:36.032 START TEST filesystem_in_capsule_ext4 00:10:36.032 ************************************ 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:36.032 02:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:36.032 mke2fs 1.46.5 (30-Dec-2021) 00:10:36.032 Discarding device blocks: 0/522240 done 00:10:36.032 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:36.032 Filesystem UUID: 9e14336f-14d1-4ab8-91e6-adba92021b9b 00:10:36.032 Superblock backups 
stored on blocks: 00:10:36.032 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:36.032 00:10:36.032 Allocating group tables: 0/64 done 00:10:36.032 Writing inode tables: 0/64 done 00:10:36.032 Creating journal (8192 blocks): done 00:10:36.288 Writing superblocks and filesystem accounting information: 0/64 done 00:10:36.288 00:10:36.288 02:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:36.288 02:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.851 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 964181 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.109 00:10:37.109 real 0m1.277s 00:10:37.109 user 0m0.012s 00:10:37.109 sys 0m0.064s 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:37.109 ************************************ 00:10:37.109 END TEST filesystem_in_capsule_ext4 00:10:37.109 ************************************ 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.109 ************************************ 00:10:37.109 START TEST filesystem_in_capsule_btrfs 00:10:37.109 ************************************ 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:37.109 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:37.367 btrfs-progs v6.6.2 00:10:37.367 See https://btrfs.readthedocs.io for more information. 00:10:37.367 00:10:37.367 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
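The make_filesystem calls traced here (common/autotest_common.sh@926-945) all follow the same shape visible in the xtrace: pick the force flag that matches the mkfs flavor, then run mkfs on the partition. A hedged reconstruction; the retry loop and its budget are assumptions, not the exact autotest_common.sh body:

  # Sketch of the make_filesystem helper these traces exercise.
  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force

      # mke2fs wants -F to overwrite an existing signature; btrfs/xfs use -f
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi

      # retry budget is an assumption for illustration
      until mkfs.$fstype $force "$dev_name"; do
          [ $((i++)) -ge 5 ] && return 1
          sleep 1
      done
      return 0
  }

The -F/-f split is why the trace tests '[' btrfs = ext4 ']' before choosing the flag: the force option is spelled differently per mkfs tool, but the helper is otherwise filesystem-agnostic.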
00:10:37.367 NOTE: several default settings have changed in version 5.15, please make sure 00:10:37.367 this does not affect your deployments: 00:10:37.367 - DUP for metadata (-m dup) 00:10:37.367 - enabled no-holes (-O no-holes) 00:10:37.367 - enabled free-space-tree (-R free-space-tree) 00:10:37.367 00:10:37.367 Label: (null) 00:10:37.367 UUID: feb4f844-e079-4974-a235-428286fb6c26 00:10:37.367 Node size: 16384 00:10:37.367 Sector size: 4096 00:10:37.367 Filesystem size: 510.00MiB 00:10:37.367 Block group profiles: 00:10:37.367 Data: single 8.00MiB 00:10:37.367 Metadata: DUP 32.00MiB 00:10:37.367 System: DUP 8.00MiB 00:10:37.367 SSD detected: yes 00:10:37.367 Zoned device: no 00:10:37.367 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:37.367 Runtime features: free-space-tree 00:10:37.367 Checksum: crc32c 00:10:37.367 Number of devices: 1 00:10:37.367 Devices: 00:10:37.367 ID SIZE PATH 00:10:37.367 1 510.00MiB /dev/nvme0n1p1 00:10:37.367 00:10:37.367 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:37.367 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 964181 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.932 02:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.932 00:10:37.932 real 0m0.852s 00:10:37.932 user 0m0.015s 00:10:37.932 sys 0m0.123s 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:10:37.932 ************************************ 00:10:37.932 END TEST filesystem_in_capsule_btrfs 00:10:37.932 ************************************ 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.932 ************************************ 00:10:37.932 START TEST filesystem_in_capsule_xfs 00:10:37.932 ************************************ 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:37.932 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:38.189 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:38.189 = sectsz=512 attr=2, projid32bit=1 00:10:38.189 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:38.189 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:38.189 data = bsize=4096 blocks=130560, imaxpct=25 00:10:38.189 = sunit=0 swidth=0 blks 00:10:38.189 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:38.190 log =internal log bsize=4096 blocks=16384, version=2 00:10:38.190 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:38.190 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:38.754 Discarding blocks...Done. 
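What follows in the trace is the filesystem smoke test from target/filesystem.sh@23-43, run identically for ext4, btrfs, and xfs. Condensed as a sketch (error handling is simplified and $nvmfpid stands in for the pid the script tracks):

  # Mount the new filesystem, prove it accepts I/O, then verify the
  # target process and block devices survived the exercise.
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                      # write a file
  sync
  rm /mnt/device/aaa                         # delete it again
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                         # target must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1      # whole device still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present

Any failing step trips the test's EXIT trap, which is why the xfs run below repeats the same mount/touch/sync/rm/umount sequence before the final kill -0 964181 check.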
00:10:38.754 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:38.754 02:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.295 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.295 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:41.295 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.295 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:41.295 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 964181 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.296 00:10:41.296 real 0m3.170s 00:10:41.296 user 0m0.014s 00:10:41.296 sys 0m0.058s 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:41.296 ************************************ 00:10:41.296 END TEST filesystem_in_capsule_xfs 00:10:41.296 ************************************ 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 964181 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 964181 ']' 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 964181 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 964181 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 964181' 00:10:41.296 killing process with pid 964181 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 964181 00:10:41.296 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 964181 00:10:41.863 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:41.863 00:10:41.863 real 0m11.318s 00:10:41.863 user 0m43.313s 00:10:41.863 sys 0m1.782s 00:10:41.864 02:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.864 ************************************ 00:10:41.864 END TEST nvmf_filesystem_in_capsule 00:10:41.864 ************************************ 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:41.864 rmmod nvme_tcp 00:10:41.864 rmmod nvme_fabrics 00:10:41.864 rmmod nvme_keyring 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.864 02:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.399 02:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.399 00:10:44.399 real 0m28.857s 00:10:44.399 user 1m33.749s 00:10:44.399 sys 0m5.432s 00:10:44.399 02:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.399 02:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.399 ************************************ 00:10:44.399 END TEST nvmf_filesystem 00:10:44.399 ************************************ 00:10:44.399 02:10:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:44.399 02:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:44.399 02:10:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.399 02:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.399 ************************************ 00:10:44.399 START TEST nvmf_target_discovery 00:10:44.399 ************************************ 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:44.399 * Looking for test storage... 00:10:44.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.399 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.400 02:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.400 02:10:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:45.776 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.776 02:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:45.776 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:45.776 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:45.776 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.776 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:45.777 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:45.777 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.777 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.777 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.777 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.035 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:46.035 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.035 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.035 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:46.035 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:46.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:46.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms
00:10:46.035
00:10:46.035 --- 10.0.0.2 ping statistics ---
00:10:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:46.035 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:10:46.035 02:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:46.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:46.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:10:46.035
00:10:46.035 --- 10.0.0.1 ping statistics ---
00:10:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:46.035 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=967530
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 967530
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 967530 ']'
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.035 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.035 [2024-07-27 02:10:14.076665] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:10:46.035 [2024-07-27 02:10:14.076744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.035 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.035 [2024-07-27 02:10:14.114860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:46.035 [2024-07-27 02:10:14.147064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.324 [2024-07-27 02:10:14.240959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.324 [2024-07-27 02:10:14.241026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.324 [2024-07-27 02:10:14.241043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.324 [2024-07-27 02:10:14.241057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.324 [2024-07-27 02:10:14.241088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
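
The nvmf_tcp_init sequence traced above reduces to the following commands (a sketch reconstructed from this run's trace; the cvl_0_* interface names, the cvl_0_0_ns_spdk namespace, and the 10.0.0.x addresses are specific to this host):

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns sanity check

Splitting the two physical ports across namespaces like this lets one host act as a back-to-back initiator/target pair, which is why nvmf_tgt itself is then launched under ip netns exec.
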
00:10:46.324 [2024-07-27 02:10:14.241144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.324 [2024-07-27 02:10:14.241178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.324 [2024-07-27 02:10:14.241306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.324 [2024-07-27 02:10:14.241308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 [2024-07-27 02:10:14.393615] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 Null1 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 02:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 [2024-07-27 02:10:14.433946] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 Null2 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.324 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:46.583 Null3 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 Null4 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.584 02:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:10:46.584
00:10:46.584 Discovery Log Number of Records 6, Generation counter 6
00:10:46.584 =====Discovery Log Entry 0======
00:10:46.584 trtype: tcp
00:10:46.584 adrfam: ipv4
00:10:46.584 subtype: current discovery subsystem
00:10:46.584 treq: not required
00:10:46.584 portid: 0
00:10:46.584 trsvcid: 4420
00:10:46.584 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:46.584 traddr: 10.0.0.2
00:10:46.584 eflags: explicit discovery connections, duplicate discovery information
00:10:46.584 sectype: none
00:10:46.584 =====Discovery Log Entry 1======
00:10:46.584 trtype: tcp
00:10:46.584 adrfam: ipv4
00:10:46.584 subtype: nvme subsystem
00:10:46.584 treq: not required
00:10:46.584 portid: 0
00:10:46.584 trsvcid: 4420
00:10:46.584 subnqn: nqn.2016-06.io.spdk:cnode1
00:10:46.584 traddr: 10.0.0.2
00:10:46.584 eflags: none
00:10:46.584 sectype: none
00:10:46.584 =====Discovery Log Entry 2======
00:10:46.584 trtype: tcp
00:10:46.584 adrfam: ipv4
00:10:46.584 subtype: nvme subsystem
00:10:46.584 treq: not required
00:10:46.584 portid: 0
00:10:46.584 trsvcid: 4420
00:10:46.584 subnqn: nqn.2016-06.io.spdk:cnode2
00:10:46.584 traddr: 10.0.0.2
00:10:46.584 eflags: none
00:10:46.584 sectype: none
00:10:46.584 =====Discovery Log Entry 3======
00:10:46.584 trtype: tcp
00:10:46.584 adrfam: ipv4
00:10:46.584 subtype: nvme subsystem
00:10:46.584 treq: not required
00:10:46.584 portid: 0
00:10:46.584 trsvcid: 4420
00:10:46.584 subnqn: nqn.2016-06.io.spdk:cnode3
00:10:46.584 traddr: 10.0.0.2
00:10:46.584 eflags: none
00:10:46.584 sectype: none
00:10:46.584 =====Discovery Log Entry 4======
00:10:46.584 trtype: tcp
00:10:46.584 adrfam: ipv4
00:10:46.584 subtype: nvme subsystem
00:10:46.584 treq: not required
00:10:46.584 portid: 0
00:10:46.584 trsvcid: 4420
00:10:46.584 subnqn: nqn.2016-06.io.spdk:cnode4
00:10:46.584 traddr: 10.0.0.2
00:10:46.584 eflags: none
00:10:46.584 sectype: none
00:10:46.584 =====Discovery Log Entry 5======
00:10:46.584 trtype: tcp
00:10:46.584 adrfam: ipv4
00:10:46.584 subtype: discovery subsystem referral
00:10:46.584 treq: not required
00:10:46.584 portid: 0
00:10:46.584 trsvcid: 4430
00:10:46.584 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:46.584 traddr: 10.0.0.2
00:10:46.584 eflags: none
00:10:46.584 sectype: none
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:10:46.584 Perform nvmf subsystem discovery via RPC
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.584 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.584 [
00:10:46.584 {
00:10:46.584 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:10:46.584 "subtype": "Discovery",
00:10:46.584 "listen_addresses": [
00:10:46.584 {
00:10:46.584 "trtype": "TCP",
00:10:46.584 "adrfam": "IPv4",
00:10:46.584 "traddr": "10.0.0.2",
00:10:46.584 "trsvcid": "4420"
00:10:46.584 }
00:10:46.584 ],
00:10:46.584 "allow_any_host": true,
00:10:46.584 "hosts": []
00:10:46.584 },
00:10:46.584 {
00:10:46.584 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:10:46.584 "subtype": "NVMe",
00:10:46.584 "listen_addresses": [
00:10:46.584 {
00:10:46.584 "trtype": "TCP",
00:10:46.584 "adrfam": "IPv4",
00:10:46.584 "traddr": "10.0.0.2",
00:10:46.584 "trsvcid": "4420"
00:10:46.584 }
00:10:46.584 ],
00:10:46.584 "allow_any_host": true,
00:10:46.584 "hosts": [],
00:10:46.584 "serial_number": "SPDK00000000000001",
00:10:46.584 "model_number": "SPDK bdev Controller",
00:10:46.584 "max_namespaces": 32,
00:10:46.584 "min_cntlid": 1,
00:10:46.584 "max_cntlid": 65519,
00:10:46.584 "namespaces": [
00:10:46.584 {
00:10:46.584 "nsid": 1,
00:10:46.584 "bdev_name": "Null1",
00:10:46.584 "name": "Null1",
00:10:46.584 "nguid": "B30F36D332AE43DBB181DFAF974C4AF5",
00:10:46.584 "uuid": "b30f36d3-32ae-43db-b181-dfaf974c4af5"
00:10:46.584 }
00:10:46.584 ]
00:10:46.584 },
00:10:46.584 {
00:10:46.584 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:46.584 "subtype": "NVMe",
00:10:46.584 "listen_addresses": [
00:10:46.584 {
00:10:46.584 "trtype": "TCP",
00:10:46.584 "adrfam": "IPv4",
00:10:46.584 "traddr": "10.0.0.2",
00:10:46.584 "trsvcid": "4420"
00:10:46.584 }
00:10:46.584 ],
00:10:46.584 "allow_any_host": true,
00:10:46.584 "hosts": [],
00:10:46.584 "serial_number": "SPDK00000000000002",
00:10:46.584 "model_number": "SPDK bdev Controller",
00:10:46.584 "max_namespaces": 32,
00:10:46.584 "min_cntlid": 1,
00:10:46.584 "max_cntlid": 65519,
00:10:46.584 "namespaces": [
00:10:46.584 {
00:10:46.584 "nsid": 1,
00:10:46.584 "bdev_name": "Null2",
00:10:46.584 "name": "Null2",
00:10:46.584 "nguid": "9B59863BC7B34AA68EE1C0B2E1E1ED06",
00:10:46.584 "uuid": "9b59863b-c7b3-4aa6-8ee1-c0b2e1e1ed06"
00:10:46.584 }
00:10:46.584 ]
00:10:46.584 },
00:10:46.584 {
00:10:46.584 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:10:46.584 "subtype": "NVMe",
00:10:46.584 "listen_addresses": [
00:10:46.584 {
00:10:46.584 "trtype": "TCP",
00:10:46.584 "adrfam": "IPv4",
00:10:46.584 "traddr": "10.0.0.2",
00:10:46.584 "trsvcid": "4420"
00:10:46.584 }
00:10:46.584 ],
00:10:46.584 "allow_any_host": true,
00:10:46.584 "hosts": [],
00:10:46.584 "serial_number": "SPDK00000000000003",
00:10:46.584 "model_number": "SPDK bdev Controller",
00:10:46.584 "max_namespaces": 32,
00:10:46.584 "min_cntlid": 1,
00:10:46.584 "max_cntlid": 65519,
00:10:46.584 "namespaces": [
00:10:46.584 {
00:10:46.584 "nsid": 1,
00:10:46.584 "bdev_name": "Null3",
00:10:46.584 "name": "Null3",
00:10:46.584 "nguid": "B4BDDB750882450D9F54F3A2ADAF6E59",
00:10:46.584 "uuid": "b4bddb75-0882-450d-9f54-f3a2adaf6e59"
00:10:46.584 }
00:10:46.584 ]
00:10:46.584 },
00:10:46.584 {
00:10:46.584 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:10:46.584 "subtype": "NVMe",
00:10:46.584 "listen_addresses": [
00:10:46.584 {
00:10:46.584 "trtype": "TCP",
00:10:46.584 "adrfam": "IPv4",
00:10:46.584 "traddr": "10.0.0.2",
00:10:46.584 "trsvcid": "4420"
00:10:46.584 }
00:10:46.584 ],
00:10:46.584 "allow_any_host": true,
00:10:46.584 "hosts": [],
00:10:46.584 "serial_number": "SPDK00000000000004",
00:10:46.584 "model_number": "SPDK bdev Controller",
00:10:46.584 "max_namespaces": 32,
00:10:46.584 "min_cntlid": 1,
00:10:46.584 "max_cntlid": 65519,
00:10:46.584 "namespaces": [
00:10:46.584 {
00:10:46.584 "nsid": 1,
00:10:46.584 "bdev_name": "Null4",
00:10:46.584 "name": "Null4",
00:10:46.584 "nguid": "C1C24531CE8C4C80900E33717A68F38C",
00:10:46.584 "uuid": "c1c24531-ce8c-4c80-900e-33717a68f38c"
00:10:46.584 }
00:10:46.584 ]
00:10:46.584 }
00:10:46.584 ]
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.585 02:10:14
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.585 02:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.585 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.844 rmmod nvme_tcp 00:10:46.844 rmmod nvme_fabrics 00:10:46.844 rmmod nvme_keyring 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 967530 ']' 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 967530 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 967530 ']' 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 967530 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 967530 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 967530' 00:10:46.844 killing process with pid 967530 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 967530 00:10:46.844 02:10:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 967530 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.104 02:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.012 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:49.012 00:10:49.012 real 0m5.133s 00:10:49.012 user 0m4.039s 00:10:49.012 sys 0m1.698s 00:10:49.012 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.012 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.012 ************************************ 00:10:49.012 END TEST nvmf_target_discovery 00:10:49.012 ************************************ 00:10:49.012 02:10:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:49.012 02:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:49.012 02:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.012 02:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:49.272 ************************************ 00:10:49.272 START TEST nvmf_referrals 00:10:49.272 ************************************ 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:49.272 * Looking for test storage... 
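
The nvmf_target_discovery test that just finished drives the target entirely over JSON-RPC. Condensed, the setup/verify/teardown it performed corresponds to the following scripts/rpc.py calls (a sketch only; rpc_cmd in the trace wraps the same methods, and the default /var/tmp/spdk.sock socket is assumed):

rpc.py nvmf_create_transport -t tcp -o -u 8192          # one-time transport init
for i in 1 2 3 4; do
    rpc.py bdev_null_create Null$i 102400 512           # name, size, block size per NULL_BDEV_SIZE/NULL_BLOCK_SIZE
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# verification: 6 discovery records = 1 discovery subsystem + 4 NVMe subsystems + 1 referral
nvme discover -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_get_subsystems
for i in 1 2 3 4; do                                    # teardown mirrors setup
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    rpc.py bdev_null_delete Null$i
done
rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
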
00:10:49.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.272 
02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:49.272 02:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.272 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:49.273 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:49.273 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.273 02:10:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.175 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:51.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:51.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:51.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:51.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.176 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:10:51.435 00:10:51.435 --- 10.0.0.2 ping statistics --- 00:10:51.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.435 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:10:51.435 00:10:51.435 --- 10.0.0.1 ping statistics --- 00:10:51.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.435 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=969615 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 969615 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 969615 ']' 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.435 02:10:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.435 [2024-07-27 02:10:19.489556] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:10:51.435 [2024-07-27 02:10:19.489644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.435 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.435 [2024-07-27 02:10:19.534816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:51.435 [2024-07-27 02:10:19.566194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.694 [2024-07-27 02:10:19.660356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.694 [2024-07-27 02:10:19.660417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.694 [2024-07-27 02:10:19.660434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.694 [2024-07-27 02:10:19.660449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.694 [2024-07-27 02:10:19.660461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.694 [2024-07-27 02:10:19.660551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.694 [2024-07-27 02:10:19.660605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.694 [2024-07-27 02:10:19.660658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.694 [2024-07-27 02:10:19.660661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.260 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.260 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:10:52.260 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.260 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.260 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 [2024-07-27 02:10:20.442753] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 [2024-07-27 02:10:20.454950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 8009 *** 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ 
\1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:52.518 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # 
get_referral_ips nvme 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:52.777 02:10:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.035 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:53.035 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:53.035 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:53.035 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:53.035 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:53.035 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.035 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 
00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:53.293 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.551 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.809 02:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 
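The referral checks traced above reduce to a short RPC-plus-discovery round trip. A condensed sketch of that flow follows, assuming SPDK's scripts/rpc.py client (the trace's rpc_cmd helper issues the same RPC names) and nvme-cli with JSON output; the relative repo path is illustrative, and the --hostnqn/--hostid flags from the trace are omitted for brevity:

  # Add three discovery referrals, confirm them over RPC and via the
  # discovery log, then remove them again (ports/IPs mirror the trace).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length        # expect 3
  scripts/rpc.py nvmf_discovery_get_referrals \
      | jq -r '.[].address.traddr' | sort                        # expect the three IPs
  # The initiator-side view must match: filter the discovery log by subtype.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length        # expect 0

Note the distinction the @67/@68 and @75/@76 checks assert: a referral added with -n nqn.2016-06.io.spdk:cnode1 appears in the discovery log as an "nvme subsystem" record, while one added with the discovery NQN appears as a "discovery subsystem referral" record.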
00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.067 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.067 rmmod nvme_tcp 00:10:54.067 rmmod nvme_fabrics 00:10:54.067 rmmod nvme_keyring 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 969615 ']' 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 969615 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 969615 ']' 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 969615 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 969615 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 969615' 00:10:54.068 killing process with pid 969615 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 969615 00:10:54.068 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 969615 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.326 02:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.862 00:10:56.862 real 0m7.248s 00:10:56.862 user 0m12.232s 00:10:56.862 sys 0m2.256s 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.862 
************************************ 00:10:56.862 END TEST nvmf_referrals 00:10:56.862 ************************************ 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.862 ************************************ 00:10:56.862 START TEST nvmf_connect_disconnect 00:10:56.862 ************************************ 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:56.862 * Looking for test storage... 00:10:56.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.862 
02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.862 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@47 -- # : 0 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.863 02:10:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:58.763 02:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.763 
02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:58.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.763 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:58.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:58.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 
-- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:58.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:58.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:10:58.764 00:10:58.764 --- 10.0.0.2 ping statistics --- 00:10:58.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.764 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:10:58.764 00:10:58.764 --- 10.0.0.1 ping statistics --- 00:10:58.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.764 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=971914 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.764 02:10:26 
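Condensed from the nvmf_tcp_init trace above: the harness keeps the NIC's second port (cvl_0_1) in the root namespace as the initiator side, moves the first port (cvl_0_0) into a private namespace for the target, opens TCP/4420, and ping-verifies both directions. A minimal sketch of that sequence, using the interface names and 10.0.0.0/24 addressing from this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through the firewall
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns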
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 971914 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 971914 ']' 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.764 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.764 [2024-07-27 02:10:26.719647] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:10:58.764 [2024-07-27 02:10:26.719738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.764 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.764 [2024-07-27 02:10:26.759714] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:58.764 [2024-07-27 02:10:26.787164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.764 [2024-07-27 02:10:26.872784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.764 [2024-07-27 02:10:26.872836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.765 [2024-07-27 02:10:26.872864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.765 [2024-07-27 02:10:26.872877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.765 [2024-07-27 02:10:26.872887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
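The target launch and the wait that follows reduce to roughly the lines below. The launch command is verbatim from the trace; the polling loop is an assumption standing in for waitforlisten, whose only visible contract here is "retry (max_retries=100) until the app answers on /var/tmp/spdk.sock":

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                   # shm id 0, full tracepoint mask, cores 0-3
  for ((i = 0; i < 100; i++)); do              # sketch of waitforlisten (assumed retry logic)
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done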
00:10:58.765 [2024-07-27 02:10:26.872968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.765 [2024-07-27 02:10:26.873082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.765 [2024-07-27 02:10:26.873109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.765 [2024-07-27 02:10:26.873112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.023 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.023 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:10:59.023 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:59.023 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.023 02:10:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 [2024-07-27 02:10:27.026589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 02:10:27 
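Stripped of the xtrace noise, the provisioning in this stage (completed by the listener call a few lines below) is a short RPC sequence; rpc.py here is shorthand for SPDK's generic scripts/rpc.py client, standing in for the harness's rpc_cmd wrapper:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport; -o and -c 0 come from NVMF_TRANSPORT_OPTS, -u sets the IO unit size
  rpc.py bdev_malloc_create 64 512                      # 64 MiB RAM bdev with 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420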
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.023 [2024-07-27 02:10:27.087727] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:10:59.023 02:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
[loop output collapsed: 100 connect/disconnect iterations (num_iterations=100), each logging 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)'; first completion at 00:11:01.546, last at 00:14:49.618]
02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.618 rmmod nvme_tcp 00:14:49.618 rmmod nvme_fabrics 00:14:49.618 rmmod nvme_keyring 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 971914 ']' 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 971914 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 971914 ']' 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 971914 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:49.618
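The collapsed run above is the output of connect_disconnect.sh's main loop: with num_iterations=100 and NVME_CONNECT='nvme connect -i 8' as traced before the run, each pass connects the kernel initiator to cnode1 and disconnects it again. The connect arguments below are reconstructed from the listener configured earlier (an assumption, since the script body itself is not in this log), and each 'NQN:... disconnected 1 controller(s)' line is ordinary nvme-cli disconnect output:

  for ((i = 0; i < 100; i++)); do
      # -i 8: request eight I/O queues, per NVME_CONNECT above
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the disconnected-controller line
  done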
02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 971914 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 971914' 00:14:49.618 killing process with pid 971914 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 971914 00:14:49.618 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 971914 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.878 02:14:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.782 00:14:51.782 real 3m55.398s 00:14:51.782 user 14m56.669s 00:14:51.782 sys 0m34.434s 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:51.782 ************************************ 00:14:51.782 END TEST nvmf_connect_disconnect 00:14:51.782 ************************************ 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.782 ************************************ 00:14:51.782 START TEST nvmf_multitarget 00:14:51.782 ************************************ 00:14:51.782 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:52.043 * Looking for test storage... 
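Both tests in this section finish through the same nvmftestfini path traced just above (and again at the end of the multitarget run): unload the kernel initiator modules, kill the target, and tear the namespace plumbing down. A sketch of the pattern; the namespace removal spelling is an assumption, since _remove_spdk_ns's output is redirected away in the log:

  modprobe -v -r nvme-tcp           # pulls nvme_fabrics/nvme_keyring out too, hence the rmmod lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                   # killprocess: checks the pid still belongs to reactor_0, then kills it
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1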
00:14:52.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
[PATH values collapsed: paths/export.sh@2-@4 each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin (already repeated from earlier sourcing) ahead of the system PATH; @5 exports PATH and @6 echoes the result] 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.043
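The build_nvmf_app_args trace above, combined with the nvmf/common.sh@270 line from the earlier test, shows how the target command line is composed as bash arrays before nvmfappstart runs it; roughly (the base array initialization is assumed, the rest is traced):

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)   # assumed base value
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # shm id and tracepoint mask (@29)
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")  # (@243)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")       # run the target inside the namespace (@270)
  "${NVMF_APP[@]}" -m 0xF &                                    # nvmfappstart supplies the core mask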
02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.043 02:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.043 02:14:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:52.043 02:14:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:52.043 02:14:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:52.043 02:14:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:53.946 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.946 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.946 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.946 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.946 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.946 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.946 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local 
-ga mlx 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:53.947 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:53.947 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.947 02:14:22 
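The gather_supported_nvmf_pci_devs walk in progress here amounts to: build a candidate list from the known Intel e810/x722 and Mellanox device IDs, then read each matched PCI function's net/ directory in sysfs to learn its interface name. A condensed sketch of the loop, assuming pci_devs has already been narrowed to the two e810 ports found in this run:

  net_devs=()
  for pci in "${pci_devs[@]}"; do                         # e.g. 0000:0a:00.0 and 0000:0a:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep the name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")                    # the real loop also requires the link to be up
  done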
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:53.947 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:53.947 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ 
tcp == tcp ]] 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.947 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:54.206 00:14:54.206 --- 10.0.0.2 ping statistics --- 00:14:54.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.206 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:14:54.206 00:14:54.206 --- 10.0.0.1 ping statistics --- 00:14:54.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.206 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1002915 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1002915 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1002915 ']' 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.206 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.206 [2024-07-27 02:14:22.262622] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:14:54.206 [2024-07-27 02:14:22.262697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.206 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.206 [2024-07-27 02:14:22.302774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:54.206 [2024-07-27 02:14:22.330028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.464 [2024-07-27 02:14:22.417160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.464 [2024-07-27 02:14:22.417213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.464 [2024-07-27 02:14:22.417238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.464 [2024-07-27 02:14:22.417248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.464 [2024-07-27 02:14:22.417259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.464 [2024-07-27 02:14:22.417329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.464 [2024-07-27 02:14:22.417387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.465 [2024-07-27 02:14:22.417454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.465 [2024-07-27 02:14:22.417456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:54.465 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:54.723 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:54.723 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:54.723 "nvmf_tgt_1" 00:14:54.723 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_2 -s 32 00:14:54.981 "nvmf_tgt_2" 00:14:54.981 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:54.981 02:14:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:54.981 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:54.981 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:54.981 true 00:14:54.981 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:55.242 true 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.242 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.242 rmmod nvme_tcp 00:14:55.242 rmmod nvme_fabrics 00:14:55.242 rmmod nvme_keyring 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1002915 ']' 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1002915 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1002915 ']' 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1002915 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.500 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1002915 00:14:55.501 02:14:23 
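The multitarget exercise just traced is driven through multitarget_rpc.py rather than the plain RPC client: count the default target, create two more targets with -s 32, then delete them and confirm the count returns to one. In script form, with all commands taken from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
  $rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]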
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.501 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.501 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1002915' 00:14:55.501 killing process with pid 1002915 00:14:55.501 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1002915 00:14:55.501 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1002915 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.760 02:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:57.666 00:14:57.666 real 0m5.775s 00:14:57.666 user 0m6.470s 00:14:57.666 sys 0m1.928s 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 ************************************ 00:14:57.666 END TEST nvmf_multitarget 00:14:57.666 ************************************ 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 ************************************ 00:14:57.666 START TEST nvmf_rpc 00:14:57.666 ************************************ 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:57.666 * Looking for test storage... 
00:14:57.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.666 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:57.667 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.927 02:14:25 
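
The host identity threaded through every nvme connect below is minted once in the common.sh setup above: nvme gen-hostnqn supplies the host NQN, and its uuid suffix doubles as the hostid. Roughly, with the suffix extraction being an assumption rather than the script's verbatim code:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # 5b23e107-7094-e311-b1cb-001e67a97d55 in this run
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
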
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.927 02:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.833 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.834 02:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:59.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:59.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.834 
02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:59.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:59.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.834 02:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:14:59.834 00:14:59.834 --- 10.0.0.2 ping statistics --- 00:14:59.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.834 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:14:59.834 00:14:59.834 --- 10.0.0.1 ping statistics --- 00:14:59.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.834 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1005030 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.834 02:14:27 
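
The interface plumbing above comes down to one idea: the target's ice port (cvl_0_0) moves into a private network namespace while the initiator's port (cvl_0_1) stays in the root namespace, so the kernel NVMe initiator and the SPDK target can share one host over a real link. Condensed from the commands in the trace; nvmf_tgt itself is then launched under the same ip netns exec prefix:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target (0.258 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
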
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1005030 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1005030 ']' 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.834 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.835 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.835 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.835 02:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.835 [2024-07-27 02:14:27.984216] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:14:59.835 [2024-07-27 02:14:27.984291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.107 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.107 [2024-07-27 02:14:28.023789] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:00.107 [2024-07-27 02:14:28.051330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.107 [2024-07-27 02:14:28.137194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.107 [2024-07-27 02:14:28.137246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.107 [2024-07-27 02:14:28.137269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.107 [2024-07-27 02:14:28.137280] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.107 [2024-07-27 02:14:28.137291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
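
The startup notices double as a debugging recipe: tracepoint group mask 0xFFFF is enabled, so nvmf events on this target (shm id 0) can be snapshotted live, or the shm file kept for offline analysis, exactly as the app_setup_trace lines suggest:

    spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/   # offline analysis/debug copy
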
00:15:00.107 [2024-07-27 02:14:28.137349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.107 [2024-07-27 02:14:28.137408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.107 [2024-07-27 02:14:28.137473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.107 [2024-07-27 02:14:28.137475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.107 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.107 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:00.107 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.107 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:00.107 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:00.367 "tick_rate": 2700000000, 00:15:00.367 "poll_groups": [ 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_000", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [] 00:15:00.367 }, 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_001", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [] 00:15:00.367 }, 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_002", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [] 00:15:00.367 }, 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_003", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [] 00:15:00.367 } 00:15:00.367 ] 00:15:00.367 }' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
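
rpc.sh's jcount helper, visible piecemeal above, is a jq filter plus a line count over the captured $stats JSON; with core mask 0xF there are four poll groups, hence the (( 4 == 4 )) assertion. A sketch consistent with the trace (the helper's exact body in rpc.sh may differ slightly):

    jcount() {
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l
    }
    jcount '.poll_groups[].name'   # -> 4
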
00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.367 [2024-07-27 02:14:28.356496] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:00.367 "tick_rate": 2700000000, 00:15:00.367 "poll_groups": [ 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_000", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [ 00:15:00.367 { 00:15:00.367 "trtype": "TCP" 00:15:00.367 } 00:15:00.367 ] 00:15:00.367 }, 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_001", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [ 00:15:00.367 { 00:15:00.367 "trtype": "TCP" 00:15:00.367 } 00:15:00.367 ] 00:15:00.367 }, 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_002", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [ 00:15:00.367 { 00:15:00.367 "trtype": "TCP" 00:15:00.367 } 00:15:00.367 ] 00:15:00.367 }, 00:15:00.367 { 00:15:00.367 "name": "nvmf_tgt_poll_group_003", 00:15:00.367 "admin_qpairs": 0, 00:15:00.367 "io_qpairs": 0, 00:15:00.367 "current_admin_qpairs": 0, 00:15:00.367 "current_io_qpairs": 0, 00:15:00.367 "pending_bdev_io": 0, 00:15:00.367 "completed_nvme_io": 0, 00:15:00.367 "transports": [ 00:15:00.367 { 00:15:00.367 "trtype": "TCP" 00:15:00.367 } 00:15:00.367 ] 00:15:00.367 } 00:15:00.367 ] 00:15:00.367 }' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:00.367 02:14:28 
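
Its sibling jsum sums a numeric field across poll groups with awk. Right after nvmf_create_transport -t tcp -o -u 8192, every poll group carries a TCP transport but no qpairs yet, so both sums come out zero, which is what the trace asserts here. Sketched the same way, again approximating the helper body:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # -> 0
    jsum '.poll_groups[].io_qpairs'      # -> 0
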
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:00.367 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.368 Malloc1 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.368 [2024-07-27 02:14:28.501656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:00.368 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:00.368 [2024-07-27 02:14:28.524170] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:00.628 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:00.628 could not add new controller: failed to write to nvme-fabrics device 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.628 02:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.196 02:14:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.196 02:14:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:01.196 02:14:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.196 02:14:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:01.196 02:14:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:03.099 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:03.099 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:03.099 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.099 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:03.099 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.099 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:03.099 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.359 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.360 [2024-07-27 02:14:31.344417] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:03.360 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:03.360 could not add new controller: failed to write to nvme-fabrics device 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.360 02:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.929 02:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.929 02:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.929 02:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.929 02:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:03.929 02:14:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
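
That was the full subsystem access-control round trip: a connect from a host absent from nqn.2016-06.io.spdk:cnode1's allowed list is rejected by the target ("does not allow host", surfacing through nvme-cli as an I/O error on /dev/nvme-fabrics), and either nvmf_subsystem_add_host or nvmf_subsystem_allow_any_host -e makes the same connect succeed. Condensed, with SPDK's standard scripts/rpc.py standing in for the test's rpc_cmd wrapper:

    # rejected: host NQN not yet allowed on the subsystem
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        && echo "unexpected: connect should have failed"
    # allow just this host NQN ...
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    # ... or open the subsystem to any host
    scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # succeeds
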
00:15:06.463 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:06.463 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:06.463 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.463 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:06.463 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.463 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:06.463 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.464 [2024-07-27 02:14:34.176708] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.464 
02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.464 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:06.723 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.723 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:06.723 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.723 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:06.723 02:14:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:09.256 02:14:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:09.256 02:14:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:09.256 02:14:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.256 02:14:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:09.256 02:14:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.256 02:14:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:09.256 02:14:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
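
The trace is now inside rpc.sh's five-pass loop (loops=5, seq 1 5 above): each pass rebuilds the subsystem from scratch, attaches Malloc1 as namespace 5, connects, verifies the SPDKISFASTANDAWESOME serial, then tears everything down. One pass, condensed from the RPCs as issued (scripts/rpc.py again in place of the rpc_cmd wrapper):

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # waitforserial SPDKISFASTANDAWESOME: poll lsblk until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
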
00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.256 [2024-07-27 02:14:37.089151] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.256 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.257 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.257 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.257 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.257 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.257 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.257 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.826 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.826 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:09.826 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.826 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:09.826 02:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.728 [2024-07-27 02:14:39.871086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.728 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.729 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.729 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.729 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.987 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.987 02:14:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.553 02:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.553 02:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:12.553 02:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.553 02:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:12.553 02:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:14.457 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.458 02:14:42 
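The iterations above all trace the same subsystem lifecycle in target/rpc.sh@81-94. A minimal sketch of that loop body, reconstructed from the xtrace output (the rpc_cmd, waitforserial and waitforserial_disconnect helpers come from the SPDK test harness; flags and ordering are as logged, surrounding setup is assumed):

for i in $(seq 1 $loops); do
    # build a subsystem, expose it over TCP, attach one namespace (NSID 5)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # connect a real kernel initiator, verify the block device appears, tear down
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done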
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.458 [2024-07-27 02:14:42.594264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.458 02:14:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.393 02:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.393 02:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:15.393 02:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.393 02:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:15.393 02:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:17.300 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:17.300 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:17.300 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.300 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:17.300 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.300 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:17.300 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.301 [2024-07-27 02:14:45.327716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.301 02:14:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.870 02:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.870 02:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.870 02:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.871 02:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:17.871 02:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:20.406 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:20.406 02:14:48 
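The repeated "sleep 2 / lsblk / grep -c" entries come from waitforserial (common/autotest_common.sh@1198-1208): poll until a block device with the expected serial shows up, giving up after ~15 tries. A plausible reconstruction; the individual commands and the retry bound are as logged, but the exact loop framing is an assumption:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0   # default: expect one device
    while ((i++ <= 15)); do
        sleep 2
        # count block devices whose SERIAL column matches
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
    done
    return 1
}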
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:20.406 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.406 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:20.406 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.406 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 [2024-07-27 02:14:48.156270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 [2024-07-27 02:14:48.204307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 [2024-07-27 02:14:48.252494] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.407 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 [2024-07-27 02:14:48.300639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 [2024-07-27 02:14:48.348804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 
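The second loop (target/rpc.sh@99-107, seq 1 5) is a pure RPC-churn variant: no host ever connects, and the namespace is added without an explicit -n flag, so removing NSID 1 afterwards confirms auto-assignment. A sketch reconstructed from the trace, same hedges as above:

for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # NSID auto-assigned
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # implies NSID 1 was assigned
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done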
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:20.408 "tick_rate": 2700000000, 00:15:20.408 "poll_groups": [ 00:15:20.408 { 00:15:20.408 "name": "nvmf_tgt_poll_group_000", 00:15:20.408 "admin_qpairs": 2, 00:15:20.408 "io_qpairs": 84, 00:15:20.408 "current_admin_qpairs": 0, 00:15:20.408 "current_io_qpairs": 0, 00:15:20.408 "pending_bdev_io": 0, 00:15:20.408 "completed_nvme_io": 207, 00:15:20.408 "transports": [ 00:15:20.408 { 00:15:20.408 "trtype": "TCP" 00:15:20.408 } 00:15:20.408 ] 00:15:20.408 }, 00:15:20.408 { 00:15:20.408 "name": "nvmf_tgt_poll_group_001", 00:15:20.408 "admin_qpairs": 2, 00:15:20.408 "io_qpairs": 84, 00:15:20.408 "current_admin_qpairs": 0, 00:15:20.408 "current_io_qpairs": 0, 00:15:20.408 "pending_bdev_io": 0, 00:15:20.408 "completed_nvme_io": 151, 00:15:20.408 "transports": [ 00:15:20.408 { 00:15:20.408 "trtype": "TCP" 00:15:20.408 } 00:15:20.408 ] 00:15:20.408 }, 00:15:20.408 { 00:15:20.408 "name": "nvmf_tgt_poll_group_002", 00:15:20.408 "admin_qpairs": 1, 00:15:20.408 "io_qpairs": 84, 00:15:20.408 "current_admin_qpairs": 0, 00:15:20.408 "current_io_qpairs": 0, 00:15:20.408 "pending_bdev_io": 0, 00:15:20.408 "completed_nvme_io": 141, 00:15:20.408 "transports": [ 00:15:20.408 { 00:15:20.408 "trtype": "TCP" 00:15:20.408 } 00:15:20.408 ] 00:15:20.408 }, 00:15:20.408 { 00:15:20.408 "name": "nvmf_tgt_poll_group_003", 00:15:20.408 "admin_qpairs": 2, 00:15:20.408 "io_qpairs": 84, 00:15:20.408 "current_admin_qpairs": 0, 00:15:20.408 "current_io_qpairs": 0, 00:15:20.408 "pending_bdev_io": 0, 00:15:20.408 "completed_nvme_io": 187, 00:15:20.408 "transports": [ 00:15:20.408 { 00:15:20.408 "trtype": "TCP" 00:15:20.408 } 00:15:20.408 ] 00:15:20.408 } 00:15:20.408 ] 00:15:20.408 }' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.408 rmmod nvme_tcp 00:15:20.408 rmmod nvme_fabrics 00:15:20.408 rmmod nvme_keyring 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1005030 ']' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1005030 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1005030 ']' 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1005030 00:15:20.408 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:20.668 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.668 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1005030 00:15:20.668 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.669 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.669 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1005030' 00:15:20.669 killing process with pid 1005030 00:15:20.669 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1005030 00:15:20.669 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1005030 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.929 02:14:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.840 00:15:22.840 real 0m25.153s 00:15:22.840 user 1m22.005s 00:15:22.840 sys 0m3.992s 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.840 ************************************ 00:15:22.840 END TEST nvmf_rpc 00:15:22.840 ************************************ 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:22.840 ************************************ 00:15:22.840 START TEST nvmf_invalid 00:15:22.840 ************************************ 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:22.840 * Looking for test storage... 00:15:22.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.840 02:14:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.099 02:14:51 
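The nvmf_invalid run re-derives its host identity the way the nvmf/common.sh@17-22 trace above shows: a host NQN is generated once with nvme gen-hostnqn, the host ID reuses its UUID, and both are packed into an argument array reused by every nvme connect. A minimal sketch; the array definitions are as logged, while extracting the host ID from the NQN suffix is an assumption (the trace only shows the literal values, which happen to match):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: host ID is the bare UUID suffix
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
# later connects then expand to:
$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n "$NVME_SUBNQN" -a 10.0.0.2 -s 4420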
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.099 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:23.100 02:14:51 
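The nvmftestinit call traced above (nvmf/common.sh@441 onward) reduces, for a phy rig, to: install the teardown trap, clear any stale SPDK network namespace, then enumerate real NICs. A compressed sketch of the logged call chain only; the real functions carry more branches, and the function boundaries here are partly assumed:

nvmftestinit() {
    [ -z "$TEST_TRANSPORT" ] && return 1       # '-z tcp' is false, so we proceed
    trap nvmftestfini SIGINT SIGTERM EXIT      # teardown runs even on failure
    prepare_net_devs
}
prepare_net_devs() {
    local -g is_hw=no                          # flips to yes once real NICs are found
    remove_spdk_ns                             # drop stale cvl_0_0_ns_spdk namespaces
    [[ $NET_TYPE != virt ]] && gather_supported_nvmf_pci_devs
}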
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:23.100 02:14:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.008 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.008 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.008 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.008 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.008 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:25.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:25.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:25.009 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.009 02:14:53 
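The PCI scan above boils down to: seed ID tables for Intel E810/X722 and Mellanox parts, keep the PCI functions whose vendor:device IDs match (two 0x8086:0x159b E810 functions on this rig), then collect the kernel net devices behind each function. A condensed sketch of the logged loop; pci_bus_cache, the full ID tables, and the driver/link-state checks live in nvmf/common.sh and are elided here:

intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")                        # tcp transport on an e810 rig
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # cvl_0_0, cvl_0_1
    net_devs+=("${pci_net_devs[@]}")
done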
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:25.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:15:25.009 00:15:25.009 --- 10.0.0.2 ping statistics --- 00:15:25.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.009 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:15:25.009 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:15:25.272 00:15:25.272 --- 10.0.0.1 ping statistics --- 00:15:25.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.272 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.272 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1009604 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1009604 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1009604 ']' 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.273 02:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.273 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.273 [2024-07-27 02:14:53.249121] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:15:25.273 [2024-07-27 02:14:53.249206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.273 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.273 [2024-07-27 02:14:53.293482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:25.273 [2024-07-27 02:14:53.324229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.273 [2024-07-27 02:14:53.417827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.273 [2024-07-27 02:14:53.417889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.273 [2024-07-27 02:14:53.417917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.273 [2024-07-27 02:14:53.417931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.273 [2024-07-27 02:14:53.417944] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
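Context for the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268): it splits the two detected E810 ports across network namespaces so that NVMe/TCP traffic has to cross the physical link instead of short-circuiting through loopback. cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of the same setup, assuming the interface names from this trace (run as root):

    # Namespace topology as built by nvmf_tcp_init; interface names taken
    # from the trace (cvl_0_0 = target port, cvl_0_1 = initiator port).
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port now lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # let NVMe/TCP (port 4420) in on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns

The two pings at the end are exactly the health check whose statistics appear in the log just before NVMF_APP is rewritten to run under 'ip netns exec'.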
00:15:25.273 [2024-07-27 02:14:53.418025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.273 [2024-07-27 02:14:53.418094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.273 [2024-07-27 02:14:53.418122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.273 [2024-07-27 02:14:53.418125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:25.531 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18946 00:15:25.788 [2024-07-27 02:14:53.829477] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:25.788 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:25.788 { 00:15:25.788 "nqn": "nqn.2016-06.io.spdk:cnode18946", 00:15:25.788 "tgt_name": "foobar", 00:15:25.788 "method": "nvmf_create_subsystem", 00:15:25.788 "req_id": 1 00:15:25.788 } 00:15:25.788 Got JSON-RPC error response 00:15:25.788 response: 00:15:25.788 { 00:15:25.788 "code": -32603, 00:15:25.788 "message": "Unable to find target foobar" 00:15:25.788 }' 00:15:25.788 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:25.788 { 00:15:25.788 "nqn": "nqn.2016-06.io.spdk:cnode18946", 00:15:25.788 "tgt_name": "foobar", 00:15:25.788 "method": "nvmf_create_subsystem", 00:15:25.788 "req_id": 1 00:15:25.788 } 00:15:25.788 Got JSON-RPC error response 00:15:25.788 response: 00:15:25.788 { 00:15:25.788 "code": -32603, 00:15:25.788 "message": "Unable to find target foobar" 00:15:25.788 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:25.788 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:25.788 02:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10359 00:15:26.045 [2024-07-27 02:14:54.106477] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10359: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:26.045 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:26.045 { 00:15:26.045 "nqn": "nqn.2016-06.io.spdk:cnode10359", 00:15:26.045 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:26.045 "method": "nvmf_create_subsystem", 00:15:26.045 "req_id": 1 00:15:26.045 } 00:15:26.045 Got JSON-RPC error 
response 00:15:26.045 response: 00:15:26.045 { 00:15:26.045 "code": -32602, 00:15:26.045 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:26.045 }' 00:15:26.045 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:26.045 { 00:15:26.045 "nqn": "nqn.2016-06.io.spdk:cnode10359", 00:15:26.045 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:26.045 "method": "nvmf_create_subsystem", 00:15:26.045 "req_id": 1 00:15:26.045 } 00:15:26.045 Got JSON-RPC error response 00:15:26.045 response: 00:15:26.045 { 00:15:26.045 "code": -32602, 00:15:26.045 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:26.045 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:26.045 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:26.045 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32636 00:15:26.304 [2024-07-27 02:14:54.371323] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32636: invalid model number 'SPDK_Controller' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:26.304 { 00:15:26.304 "nqn": "nqn.2016-06.io.spdk:cnode32636", 00:15:26.304 "model_number": "SPDK_Controller\u001f", 00:15:26.304 "method": "nvmf_create_subsystem", 00:15:26.304 "req_id": 1 00:15:26.304 } 00:15:26.304 Got JSON-RPC error response 00:15:26.304 response: 00:15:26.304 { 00:15:26.304 "code": -32602, 00:15:26.304 "message": "Invalid MN SPDK_Controller\u001f" 00:15:26.304 }' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:26.304 { 00:15:26.304 "nqn": "nqn.2016-06.io.spdk:cnode32636", 00:15:26.304 "model_number": "SPDK_Controller\u001f", 00:15:26.304 "method": "nvmf_create_subsystem", 00:15:26.304 "req_id": 1 00:15:26.304 } 00:15:26.304 Got JSON-RPC error response 00:15:26.304 response: 00:15:26.304 { 00:15:26.304 "code": -32602, 00:15:26.304 "message": "Invalid MN SPDK_Controller\u001f" 00:15:26.304 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 111 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:26.304 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:26.305 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:26.562 02:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]] 00:15:26.562 02:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ouXx$G<KHTd+fIBGyzG|q' 00:15:29.662 02:14:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.662 02:14:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.198 00:15:32.198 real 0m8.836s 00:15:32.198 user 0m21.036s 00:15:32.198 sys 0m2.455s 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:32.198 ************************************ 00:15:32.198 END TEST nvmf_invalid 00:15:32.198 ************************************ 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.198 ************************************ 00:15:32.198 START TEST nvmf_connect_stress 00:15:32.198 ************************************ 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:32.198 * Looking for test storage... 
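For reference, the long printf/echo run in the nvmf_invalid trace above is gen_random_s (target/invalid.sh@19-31) assembling a 21-character test string one character at a time: each iteration picks an ASCII code from a chars table covering 32-127, renders it with printf %x plus echo -e '\xNN', and appends it to string; invalid.sh@28 additionally checks whether the first character is a dash before the final echo. A condensed sketch of the same pattern, with the $RANDOM-based index selection being an assumption (the trace does not show how the table index is drawn):

    # Condensed gen_random_s: build a length-N string of printable ASCII.
    gen_random_s() {
        local length=$1 ll code string=
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 96 + 32 ))                  # codes 32..127, as in the chars table
            string+=$(echo -e "\\x$(printf %x "$code")")  # one character per iteration
        done
        printf '%s\n' "$string"   # safe even if the string starts with '-'
    }
    gen_random_s 21   # e.g. ouXx$G<KHTd+fIBGyzG|q, the string generated above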
00:15:32.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.198 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.199 02:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.571 02:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:33.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:33.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
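The records on either side of this point are one pass of the prepare_net_devs port scan: classification is done purely by PCI vendor:device ID (0x8086:0x159b is the E810 family bound to the ice driver, collected into the e810 array), after which each matching function is mapped to its kernel netdev through sysfs, producing the 'Found net devices under ...' lines that follow. The same lookup can be reproduced directly against sysfs, without the pci_bus_cache arrays of nvmf/common.sh:

    # Direct sysfs version of the e810 port discovery seen in the trace:
    # match vendor 0x8086 / device 0x159b, then list the attached netdevs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
        done
    done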
00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:33.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:33.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.571 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:15:33.829 00:15:33.829 --- 10.0.0.2 ping statistics --- 00:15:33.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.829 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:15:33.829 00:15:33.829 --- 10.0.0.1 ping statistics --- 00:15:33.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.829 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.829 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1012217 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1012217 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1012217 ']' 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.830 02:15:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.830 [2024-07-27 02:15:01.885848] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
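The nvmfappstart records around this point follow the same launch pattern as in the nvmf_invalid run: start nvmf_tgt inside the target namespace in the background, record nvmfpid, and block in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified approximation of that start-and-poll sequence (the polling loop below stands in for the waitforlisten helper, whose body is not part of this excerpt):

    # Start nvmf_tgt in the target namespace, then wait for its RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do
        # rpc_get_methods only succeeds once the app is listening on the socket
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done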
00:15:33.830 [2024-07-27 02:15:01.885936] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.830 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.830 [2024-07-27 02:15:01.924486] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:33.830 [2024-07-27 02:15:01.956678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.088 [2024-07-27 02:15:02.049726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.088 [2024-07-27 02:15:02.049790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.088 [2024-07-27 02:15:02.049807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.088 [2024-07-27 02:15:02.049821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.088 [2024-07-27 02:15:02.049843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.088 [2024-07-27 02:15:02.049913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.088 [2024-07-27 02:15:02.049942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.088 [2024-07-27 02:15:02.049945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.088 [2024-07-27 02:15:02.189938] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.088 [2024-07-27 02:15:02.219300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.088 NULL1 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1012368 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.088 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.347 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.347 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.347 02:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- 
# rpc_cmd 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.348 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.605 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.606 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:34.606 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.606 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.606 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.863 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.863 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:34.863 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.863 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.863 02:15:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.119 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.119 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:35.119 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.119 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.119 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.687 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.687 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:35.687 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.687 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.687 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.946 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.946 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:35.946 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.946 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.946 02:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.203 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.203 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:36.203 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
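The trace above is the core of connect_stress.sh: the for i in $(seq 1 20) / cat pair assembles a batch file of RPC calls (the rpc.txt that is removed again at teardown), and the repeated kill -0 1012368 / rpc_cmd pairs replay that batch against the target for as long as the stress process stays alive. A minimal sketch of the pattern, assuming $stress_pid holds the stress PID; which RPC gets batched is an assumption here, with nvmf_get_subsystems standing in:

rpc_txt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
# Build the RPC batch; the trace shows 20 identical cat iterations.
for i in $(seq 1 20); do
    cat >> "$rpc_txt" <<'EOF'
nvmf_get_subsystems
EOF
done
# Keep replaying the batch while the stress process still exists;
# kill -0 only probes for the PID, it delivers no signal.
while kill -0 "$stress_pid" 2> /dev/null; do
    rpc_cmd < "$rpc_txt" > /dev/null
done
wait "$stress_pid"   # reap it once kill -0 reports "No such process"
rm -f "$rpc_txt"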
00:15:36.203 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.203 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.461 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.461 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:36.461 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.461 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.461 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.719 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.719 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:36.719 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.719 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.719 02:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.285 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.285 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:37.285 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.285 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.285 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.543 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.543 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:37.543 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.543 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.543 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.800 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.800 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:37.800 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.800 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.800 02:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.058 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.058 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:38.058 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.058 
02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.058 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.316 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.316 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:38.316 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.316 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.316 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.912 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.912 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:38.912 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.912 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.912 02:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.174 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.174 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:39.174 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.174 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.174 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.431 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.432 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:39.432 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.432 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.432 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.689 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.689 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:39.689 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.689 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.689 02:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.946 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.946 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:39.946 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.946 02:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.946 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.511 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.511 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:40.511 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.511 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.511 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.769 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.769 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:40.769 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.769 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.769 02:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.026 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.026 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:41.026 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.026 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.026 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.284 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.284 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:41.284 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.284 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.284 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.542 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.542 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:41.542 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.542 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.542 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.107 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.107 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:42.107 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.107 02:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.107 02:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.365 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.365 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:42.365 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.365 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.365 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.623 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.623 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:42.623 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.623 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.623 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.880 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.880 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:42.880 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.880 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.880 02:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.140 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.140 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:43.140 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.140 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.140 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.707 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.707 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:43.707 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.707 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.707 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.967 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.967 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:43.967 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.967 02:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.967 02:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.226 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:44.227 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.227 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.227 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.485 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1012368 00:15:44.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1012368) - No such process 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1012368 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.485 rmmod nvme_tcp 00:15:44.485 rmmod nvme_fabrics 00:15:44.485 rmmod nvme_keyring 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1012217 ']' 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1012217 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1012217 ']' 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1012217 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:44.485 02:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1012217 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1012217' 00:15:44.485 killing process with pid 1012217 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1012217 00:15:44.485 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1012217 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.745 02:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.290 00:15:47.290 real 0m15.077s 00:15:47.290 user 0m38.170s 00:15:47.290 sys 0m5.899s 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.290 ************************************ 00:15:47.290 END TEST nvmf_connect_stress 00:15:47.290 ************************************ 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.290 ************************************ 00:15:47.290 START TEST nvmf_fused_ordering 00:15:47.290 ************************************ 00:15:47.290 02:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:47.290 * Looking for test storage... 
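After the stress run exits, nvmftestfini tears the environment down: it syncs, unloads the kernel NVMe-oF initiator modules (the rmmod lines above are modprobe's verbose output, retried under set +e for up to 20 passes), kills the target by its recorded PID, and flushes the initiator-side test address. Condensed into the commands the trace shows, with $nvmfpid standing in for the recorded PID 1012217:

sync
# modprobe -v -r prints each rmmod it performs (nvme_tcp, nvme_fabrics,
# nvme_keyring in the output above)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"            # SIGTERM to the nvmf_tgt reactor process
wait "$nvmfpid"
ip -4 addr flush cvl_0_1   # drop the 10.0.0.1/24 test address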
00:15:47.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.290 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.291 02:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.194 02:15:16 
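The new test begins by sourcing test/nvmf/common.sh, and the trace records the defaults it establishes; collected here for reference, with values copied from the lines above:

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
NVME_CONNECT='nvme connect'
NET_TYPE=phy                        # physical NICs rather than virtual devices
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn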
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.194 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:49.195 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:49.195 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
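gather_supported_nvmf_pci_devs classifies candidate NICs purely by PCI vendor/device ID: the e810 array collects the Intel E810 parts (0x1592, 0x159b), x722 collects 0x37d2, and mlx collects the Mellanox ConnectX IDs, which is how both functions 0000:0a:00.0 and 0000:0a:00.1 are matched above. For each matched function the script then lists the net devices sysfs exposes beneath it; a rough sketch of that walk, using the same expansions that appear in the trace just below:

for pci in "${pci_devs[@]}"; do
    # Every netdev backed by this PCI function appears as a directory
    # under /sys/bus/pci/devices/<bdf>/net/, e.g. .../net/cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done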
00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:49.195 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:49.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.195 02:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:15:49.195 00:15:49.195 --- 10.0.0.2 ping statistics --- 00:15:49.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.195 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:15:49.195 00:15:49.195 --- 10.0.0.1 ping statistics --- 00:15:49.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.195 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1016015 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.195 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1016015 00:15:49.196 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1016015 ']' 00:15:49.196 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.196 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.196 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.196 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.196 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.196 [2024-07-27 02:15:17.132543] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
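The two reported interfaces are then split across network namespaces so initiator and target traffic actually crosses the link: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), and nvmf_tgt is launched under ip netns exec. The sequence the trace just executed, condensed:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # target reachable from root ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse direction
# nvmfappstart then runs the target inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &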
00:15:49.196 [2024-07-27 02:15:17.132612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.196 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.196 [2024-07-27 02:15:17.169948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:49.196 [2024-07-27 02:15:17.201834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.196 [2024-07-27 02:15:17.296814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.196 [2024-07-27 02:15:17.296881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.196 [2024-07-27 02:15:17.296898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.196 [2024-07-27 02:15:17.296911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.196 [2024-07-27 02:15:17.296923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.196 [2024-07-27 02:15:17.296953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 [2024-07-27 02:15:17.443046] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.455 
02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 [2024-07-27 02:15:17.459276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 NULL1 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.456 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:49.456 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.456 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.456 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.456 02:15:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:49.456 [2024-07-27 02:15:17.504837] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:15:49.456 [2024-07-27 02:15:17.504890] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016046 ] 00:15:49.456 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.456 [2024-07-27 02:15:17.541571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
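Once waitforlisten sees the target's RPC socket, the test provisions it entirely over JSON-RPC: a TCP transport, subsystem cnode1 capped at 10 namespaces with any host allowed, a TCP listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; issued by hand, the same sequence would look like:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10     # -a: allow any host, -m: max namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB backing, 512 B blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering helper is then launched against this listener with the transport-ID string shown in the trace (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'); it attaches to cnode1, sees the 1 GB namespace, and each fused_ordering(n) line that follows reports one iteration of its fused-command submission loop.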
00:15:50.026 Attached to nqn.2016-06.io.spdk:cnode1
00:15:50.026 Namespace ID: 1 size: 1GB
00:15:50.026 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(1022) collapsed: 1024 iterations in all, logged one per line between 00:15:50.026 and 00:15:53.047 ...]
00:15:53.047 fused_ordering(1023)
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:53.047 02:15:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:53.047 rmmod nvme_tcp
00:15:53.047 rmmod nvme_fabrics
00:15:53.047 rmmod nvme_keyring
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1016015 ']'
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1016015
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1016015 ']'
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1016015
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1016015
00:15:53.047 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:15:53.048 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:15:53.048 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1016015'
00:15:53.048 killing process with pid 1016015
00:15:53.048 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1016015
00:15:53.048 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1016015
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:53.306 02:15:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:55.215 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:55.215
00:15:55.215 real 0m8.385s
00:15:55.215 user 0m6.016s
00:15:55.215 sys 0m4.115s
00:15:55.215 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:55.215 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:55.215 ************************************
00:15:55.215 END TEST nvmf_fused_ordering
00:15:55.215 ************************************
00:15:55.215 02:15:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:15:55.215 02:15:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:15:55.215 02:15:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:55.215 02:15:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:55.474 ************************************
00:15:55.474 START TEST nvmf_ns_masking
00:15:55.474 ************************************
00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
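[nvmftestfini, traced at the end of the fused_ordering run above, tears the target down before the next test starts. A simplified sketch of the same steps, assuming $nvmfpid holds the pid of the nvmf_tgt process started for the test:]

    # Unload the kernel NVMe-oF initiator modules pulled in by the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the SPDK target application and reap it.
    kill "$nvmfpid"
    wait "$nvmfpid"
    # Drop the test IP configuration from the initiator-side interface.
    ip -4 addr flush cvl_0_1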
00:15:55.474 * Looking for test storage... 00:15:55.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.474 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [... paths/export.sh@3-@6 collapsed: the same PATH is re-prepended with the /opt/go and /opt/protoc toolchains, exported, and echoed back; duplicated segments omitted ...] 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.475 02:15:23
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7b5a5ebb-68f8-42b1-8ae9-da9409e0dd4e 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=71ecce7a-73d2-46d5-9756-47b5c79f6403 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fe8554ca-7d1e-48c0-b9ee-95798712287d 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:55.475 02:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:57.423 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:57.424 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:57.424 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:57.424 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:57.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.424 02:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.424 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:57.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:15:57.424 00:15:57.424 --- 10.0.0.2 ping statistics --- 00:15:57.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.424 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:15:57.425 00:15:57.425 --- 10.0.0.1 ping statistics --- 00:15:57.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.425 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1018373 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1018373 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1018373 ']' 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.425 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.425 [2024-07-27 02:15:25.523356] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:15:57.425 [2024-07-27 02:15:25.523458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.425 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.425 [2024-07-27 02:15:25.561937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:57.683 [2024-07-27 02:15:25.594247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.683 [2024-07-27 02:15:25.684983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.683 [2024-07-27 02:15:25.685042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.683 [2024-07-27 02:15:25.685074] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.683 [2024-07-27 02:15:25.685089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.683 [2024-07-27 02:15:25.685110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
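For reference, the namespace wiring and target launch above reduce to a handful of commands. A minimal sketch follows, using the namespace name, addresses, and flags recorded in this run; the spdk checkout path is shortened to "." and backgrounding the target with "&" is an assumption (the harness does its own process tracking):

  # confirm the veth pair is reachable in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the nvmf target inside the namespace (instance id 0, tracepoint mask 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

  # once /var/tmp/spdk.sock is listening, create the TCP transport used below
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192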
00:15:57.683 [2024-07-27 02:15:25.685140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.683 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.683 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:57.683 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.683 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.683 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.683 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.683 02:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:58.248 [2024-07-27 02:15:26.106235] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.249 02:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:58.249 02:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:58.249 02:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:58.249 Malloc1 00:15:58.507 02:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:58.765 Malloc2 00:15:58.765 02:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:59.024 02:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:59.024 02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.589 [2024-07-27 02:15:27.464874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.589 02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:59.589 02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe8554ca-7d1e-48c0-b9ee-95798712287d -a 10.0.0.2 -s 4420 -i 4 00:15:59.589 02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.589 02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.589 02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.589 02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.589 
02:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:02.118 [ 0]:0x1 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2774b724eeae427aab57236c25549ac6 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2774b724eeae427aab57236c25549ac6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.118 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:02.118 [ 0]:0x1 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2774b724eeae427aab57236c25549ac6 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2774b724eeae427aab57236c25549ac6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.118 02:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:02.118 [ 1]:0x2 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=685e8943ca9848fbbdaecd5bf7c4e4ce 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 685e8943ca9848fbbdaecd5bf7c4e4ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.118 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.376 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:02.635 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:02.635 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe8554ca-7d1e-48c0-b9ee-95798712287d -a 10.0.0.2 -s 4420 -i 4 00:16:02.893 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:02.893 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:02.893 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.893 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:02.893 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:02.893 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:04.794 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.052 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.052 [ 0]:0x2 00:16:05.052 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.053 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.053 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=685e8943ca9848fbbdaecd5bf7c4e4ce 00:16:05.053 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 685e8943ca9848fbbdaecd5bf7c4e4ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.053 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.310 [ 0]:0x1 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2774b724eeae427aab57236c25549ac6 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2774b724eeae427aab57236c25549ac6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.310 [ 1]:0x2 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.310 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.568 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=685e8943ca9848fbbdaecd5bf7c4e4ce 00:16:05.568 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 685e8943ca9848fbbdaecd5bf7c4e4ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.568 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.826 02:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.826 [ 0]:0x2 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=685e8943ca9848fbbdaecd5bf7c4e4ce 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 685e8943ca9848fbbdaecd5bf7c4e4ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.826 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:06.084 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:06.084 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe8554ca-7d1e-48c0-b9ee-95798712287d -a 10.0.0.2 -s 4420 -i 4 00:16:06.342 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:06.342 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:06.342 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.342 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:06.342 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:06.342 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.242 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.242 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.242 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:08.500 [ 0]:0x1 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2774b724eeae427aab57236c25549ac6 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2774b724eeae427aab57236c25549ac6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:08.500 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:08.758 [ 1]:0x2 00:16:08.758 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:08.758 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:08.758 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=685e8943ca9848fbbdaecd5bf7c4e4ce 00:16:08.758 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 685e8943ca9848fbbdaecd5bf7c4e4ce != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:08.758 02:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:09.016 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:09.016 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:09.016 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:09.016 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:09.016 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.016 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:09.017 [ 0]:0x2 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=685e8943ca9848fbbdaecd5bf7c4e4ce 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 685e8943ca9848fbbdaecd5bf7c4e4ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.017 02:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:09.017 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:09.275 [2024-07-27 02:15:37.330770] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:09.275 request: 00:16:09.275 { 00:16:09.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.275 "nsid": 2, 00:16:09.275 "host": "nqn.2016-06.io.spdk:host1", 00:16:09.275 "method": "nvmf_ns_remove_host", 00:16:09.275 "req_id": 1 00:16:09.275 } 00:16:09.275 Got JSON-RPC error response 00:16:09.275 response: 00:16:09.275 { 00:16:09.275 "code": -32602, 00:16:09.275 "message": "Invalid parameters" 00:16:09.275 } 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:09.275 02:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.275 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:09.533 [ 0]:0x2 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=685e8943ca9848fbbdaecd5bf7c4e4ce 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 685e8943ca9848fbbdaecd5bf7c4e4ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:09.533 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1019988 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1019988 /var/tmp/host.sock 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1019988 ']' 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:09.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.534 02:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:09.792 [2024-07-27 02:15:37.705365] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:16:09.792 [2024-07-27 02:15:37.705455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019988 ] 00:16:09.792 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.792 [2024-07-27 02:15:37.738498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
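To recap the masking flow exercised above: per-host visibility is driven entirely by rpc.py, sketched here with the subsystem and host NQNs from this run. A namespace only honors per-host controls when attached with --no-auto-visible, which is presumably why the earlier nvmf_ns_remove_host against nsid 2 (attached without that flag) failed with -32602 Invalid parameters:

  # attach the namespace hidden from every host by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # expose nsid 1 to a single host, then mask it again
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1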
00:16:09.792 [2024-07-27 02:15:37.770238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.792 [2024-07-27 02:15:37.863137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.051 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.051 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:10.051 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.309 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:10.567 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7b5a5ebb-68f8-42b1-8ae9-da9409e0dd4e 00:16:10.568 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:10.568 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7B5A5EBB68F842B18AE9DA9409E0DD4E -i 00:16:11.134 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 71ecce7a-73d2-46d5-9756-47b5c79f6403 00:16:11.134 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:11.134 02:15:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 71ECCE7A73D246D5975647B5C79F6403 -i 00:16:11.134 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:11.392 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:11.650 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:11.650 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:12.217 nvme0n1 00:16:12.217 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:12.217 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:12.783 nvme1n2 00:16:12.783 02:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:12.783 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:12.783 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:12.784 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:12.784 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:13.042 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:13.042 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:13.042 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:13.042 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:13.300 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7b5a5ebb-68f8-42b1-8ae9-da9409e0dd4e == \7\b\5\a\5\e\b\b\-\6\8\f\8\-\4\2\b\1\-\8\a\e\9\-\d\a\9\4\0\9\e\0\d\d\4\e ]] 00:16:13.300 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:13.300 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:13.300 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 71ecce7a-73d2-46d5-9756-47b5c79f6403 == \7\1\e\c\c\e\7\a\-\7\3\d\2\-\4\6\d\5\-\9\7\5\6\-\4\7\b\5\c\7\9\f\6\4\0\3 ]] 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1019988 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1019988 ']' 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1019988 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1019988 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1019988' 00:16:13.558 killing process with pid 1019988 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1019988 00:16:13.558 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1019988 00:16:14.125 02:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.384 rmmod nvme_tcp 00:16:14.384 rmmod nvme_fabrics 00:16:14.384 rmmod nvme_keyring 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1018373 ']' 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1018373 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1018373 ']' 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1018373 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1018373 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1018373' 00:16:14.384 killing process with pid 1018373 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1018373 00:16:14.384 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1018373 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.646 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.182 00:16:17.182 real 0m21.392s 00:16:17.182 user 0m28.286s 00:16:17.182 sys 0m4.059s 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.182 ************************************ 00:16:17.182 END TEST nvmf_ns_masking 00:16:17.182 ************************************ 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.182 ************************************ 00:16:17.182 START TEST nvmf_nvme_cli 00:16:17.182 ************************************ 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:17.182 * Looking for test storage... 
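Before moving on: the ns_is_visible checks that ran throughout the masking test are two stock nvme-cli commands issued from the initiator, sketched here with the controller name nvme0 as enumerated in this run:

  # a visible namespace shows up in the active namespace list...
  nvme list-ns /dev/nvme0 | grep 0x1

  # ...and reports its real NGUID; in this run a masked namespace read back as all zeroes
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid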
00:16:17.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 02:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:17.182 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.183 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.086 02:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:19.086 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:19.086 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.086 02:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:19.086 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.086 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:19.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.087 02:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.087 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:16:19.087 00:16:19.087 --- 10.0.0.2 ping statistics --- 00:16:19.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.087 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:16:19.087 00:16:19.087 --- 10.0.0.1 ping statistics --- 00:16:19.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.087 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1022487 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1022487 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1022487 ']' 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.087 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.087 [2024-07-27 02:15:47.091357] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
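The nvmf_tcp_init records a little further up carve a point-to-point NVMe/TCP link out of one dual-port NIC: port cvl_0_0 is moved into a fresh network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings confirm the link in both directions before the target starts. Condensed from the trace (interface and namespace names are the ones from this run):

  ip netns add cvl_0_0_ns_spdk                      # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # pass NVMe/TCP (port 4420) on the initiator interface
  ping -c 1 10.0.0.2                                # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> root namespace

The nvmf_tgt process launched in the records that follow is wrapped in 'ip netns exec cvl_0_0_ns_spdk ...' so it binds inside the namespace.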
00:16:19.087 [2024-07-27 02:15:47.091460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.087 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.087 [2024-07-27 02:15:47.128417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:19.087 [2024-07-27 02:15:47.160204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.345 [2024-07-27 02:15:47.257129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.345 [2024-07-27 02:15:47.257187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.345 [2024-07-27 02:15:47.257211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.345 [2024-07-27 02:15:47.257225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.345 [2024-07-27 02:15:47.257237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.345 [2024-07-27 02:15:47.257312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.345 [2024-07-27 02:15:47.257373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.346 [2024-07-27 02:15:47.257436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.346 [2024-07-27 02:15:47.257439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 [2024-07-27 02:15:47.421747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 Malloc0 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
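rpc_cmd in this trace is a thin wrapper around scripts/rpc.py talking to the target over /var/tmp/spdk.sock. Written out directly, the target setup this test performs here and in the records that follow looks like the sketch below; every value is taken from the trace (-i 291 sets the subsystem's minimum controller ID, which the nvme-cli side of the test later relies on):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB io-unit-size, opts from nvmf/common.sh
  $rpc bdev_malloc_create 64 512 -b Malloc0           # two 64 MiB RAM bdevs with 512 B blocks
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME \
      -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # initiator side, as exercised further down in the trace
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 \
      -a 10.0.0.2 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Because rpc.py reaches the target over a Unix socket, the RPCs work even though nvmf_tgt itself runs inside the cvl_0_0_ns_spdk namespace.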
00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 Malloc1 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.346 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.604 [2024-07-27 02:15:47.508034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:19.604 00:16:19.604 Discovery Log Number of Records 2, Generation counter 2 00:16:19.604 =====Discovery 
Log Entry 0====== 00:16:19.604 trtype: tcp 00:16:19.604 adrfam: ipv4 00:16:19.604 subtype: current discovery subsystem 00:16:19.604 treq: not required 00:16:19.604 portid: 0 00:16:19.604 trsvcid: 4420 00:16:19.604 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:19.604 traddr: 10.0.0.2 00:16:19.604 eflags: explicit discovery connections, duplicate discovery information 00:16:19.604 sectype: none 00:16:19.604 =====Discovery Log Entry 1====== 00:16:19.604 trtype: tcp 00:16:19.604 adrfam: ipv4 00:16:19.604 subtype: nvme subsystem 00:16:19.604 treq: not required 00:16:19.604 portid: 0 00:16:19.604 trsvcid: 4420 00:16:19.604 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:19.604 traddr: 10.0.0.2 00:16:19.604 eflags: none 00:16:19.604 sectype: none 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:19.604 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.171 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:20.171 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:20.171 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.171 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:20.171 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:20.171 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.066 02:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.066 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:22.323 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:22.323 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.323 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:22.323 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.323 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:22.323 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:22.323 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:22.324 /dev/nvme0n1 ]] 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.324 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:22.581 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:22.581 02:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.840 rmmod nvme_tcp 00:16:22.840 rmmod nvme_fabrics 00:16:22.840 rmmod nvme_keyring 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1022487 ']' 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1022487 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1022487 ']' 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1022487 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:22.840 02:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1022487 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1022487' 00:16:22.840 killing process with pid 1022487 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1022487 00:16:22.840 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1022487 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.098 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:25.634 00:16:25.634 real 0m8.393s 00:16:25.634 user 0m16.029s 00:16:25.634 sys 0m2.247s 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:25.634 ************************************ 00:16:25.634 END TEST nvmf_nvme_cli 00:16:25.634 ************************************ 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.634 ************************************ 00:16:25.634 START TEST nvmf_vfio_user 00:16:25.634 ************************************ 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:25.634 * Looking for test storage... 
00:16:25.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
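The repeated /opt/go, /opt/protoc and /opt/golangci segments in the PATH values above come from paths/export.sh being re-sourced once per test suite: each pass prepends the same three toolchain directories again. A guarded prepend would keep PATH idempotent; the helper below is hypothetical, not part of paths/export.sh, and is shown only to explain the growth:

  # hypothetical helper, not in paths/export.sh
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;               # already on PATH, skip
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/golangci/1.54.2/bin
  export PATH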
00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.634 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:25.635 02:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1023374 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1023374' 00:16:25.635 Process pid: 1023374 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1023374 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1023374 ']' 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:25.635 [2024-07-27 02:15:53.404478] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:16:25.635 [2024-07-27 02:15:53.404578] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.635 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.635 [2024-07-27 02:15:53.446563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:25.635 [2024-07-27 02:15:53.495363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.635 [2024-07-27 02:15:53.591142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:25.635 [2024-07-27 02:15:53.591208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.635 [2024-07-27 02:15:53.591233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.635 [2024-07-27 02:15:53.591256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.635 [2024-07-27 02:15:53.591276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.635 [2024-07-27 02:15:53.591344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.635 [2024-07-27 02:15:53.591406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.635 [2024-07-27 02:15:53.591471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.635 [2024-07-27 02:15:53.591479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:25.635 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:27.003 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:27.003 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:27.003 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:27.003 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:27.003 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:27.003 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:27.261 Malloc1 00:16:27.261 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:27.519 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:27.776 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:28.034 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:28.034 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:28.034 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:28.292 Malloc2 00:16:28.292 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:28.549 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:28.807 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:29.066 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:29.066 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:29.066 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:29.066 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:29.066 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:29.066 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:29.066 [2024-07-27 02:15:57.036057] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:16:29.066 [2024-07-27 02:15:57.036131] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023828 ] 00:16:29.066 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.066 [2024-07-27 02:15:57.054812] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
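With the VFIOUSER transport being set up above, the listener address is a directory on the local filesystem rather than an IP and port: nvmf_subsystem_add_listener takes the directory as traddr, the target creates a vfio-user control socket (the 'cntrl' path seen below) under it, and an initiator maps the emulated controller's BARs through that socket, which is exactly what the spdk_nvme_identify invocation above exercises. Condensed from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1      # the listener "address" is this directory
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # initiator: a transport ID pointing at the same directory instead of an IP/port
  spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci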
00:16:29.066 [2024-07-27 02:15:57.072537] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:29.066 [2024-07-27 02:15:57.078565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:29.066 [2024-07-27 02:15:57.078598] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f00fd68f000 00:16:29.066 [2024-07-27 02:15:57.079558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.080554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.081558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.082562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.083569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.084567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.085579] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.086586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.066 [2024-07-27 02:15:57.087599] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:29.066 [2024-07-27 02:15:57.087620] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f00fc451000 00:16:29.066 [2024-07-27 02:15:57.088738] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:29.066 [2024-07-27 02:15:57.102575] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:29.066 [2024-07-27 02:15:57.102613] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:29.066 [2024-07-27 02:15:57.107741] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:29.066 [2024-07-27 02:15:57.107793] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:29.066 [2024-07-27 02:15:57.107884] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:29.066 [2024-07-27 02:15:57.107909] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:29.066 [2024-07-27 02:15:57.107919] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
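The register traffic above and below is the standard NVMe controller-enable handshake, just carried over the vfio-user socket instead of PCIe: read VS (offset 0x8; the value 0x10300 decodes to NVMe 1.3) and CAP (offset 0x0), confirm CC.EN = 0 and wait for CSTS.RDY = 0, program the admin queue (ASQ at 0x28, ACQ at 0x30, AQA at 0x24), then write CC = 0x460001 (EN=1 with 64-byte SQ entries and 16-byte CQ entries) and poll CSTS until RDY = 1. On a locally attached PCIe controller the same register block can be dumped with nvme-cli, which is useful only for relating these offsets to their names (it does not apply to the vfio-user endpoint in this run):

  # /dev/nvme0 is a placeholder for a local PCIe controller; offsets match the trace:
  # 0x0 CAP, 0x8 VS, 0x14 CC, 0x1c CSTS, 0x24 AQA, 0x28 ASQ, 0x30 ACQ
  nvme show-regs /dev/nvme0 -H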
00:16:29.066 [2024-07-27 02:15:57.108720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:29.066 [2024-07-27 02:15:57.108744] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:29.066 [2024-07-27 02:15:57.108756] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:29.066 [2024-07-27 02:15:57.109724] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:29.066 [2024-07-27 02:15:57.109742] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:29.066 [2024-07-27 02:15:57.109754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:29.066 [2024-07-27 02:15:57.110728] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:29.066 [2024-07-27 02:15:57.110747] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:29.066 [2024-07-27 02:15:57.111731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:29.066 [2024-07-27 02:15:57.111748] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:29.066 [2024-07-27 02:15:57.111757] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:29.066 [2024-07-27 02:15:57.111768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:29.066 [2024-07-27 02:15:57.111877] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:29.066 [2024-07-27 02:15:57.111885] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:29.066 [2024-07-27 02:15:57.111893] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:29.066 [2024-07-27 02:15:57.112740] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:29.066 [2024-07-27 02:15:57.113747] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:29.066 [2024-07-27 02:15:57.114748] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:29.066 [2024-07-27 02:15:57.115747] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.066 [2024-07-27 02:15:57.115844] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:16:29.066 [2024-07-27 02:15:57.116762] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:29.066 [2024-07-27 02:15:57.116780] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:29.066 [2024-07-27 02:15:57.116789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:29.066 [2024-07-27 02:15:57.116812] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:29.067 [2024-07-27 02:15:57.116829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.116853] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:29.067 [2024-07-27 02:15:57.116862] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.067 [2024-07-27 02:15:57.116868] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.067 [2024-07-27 02:15:57.116886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.116949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.116964] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:29.067 [2024-07-27 02:15:57.116972] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:29.067 [2024-07-27 02:15:57.116979] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:29.067 [2024-07-27 02:15:57.116986] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:29.067 [2024-07-27 02:15:57.116994] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:29.067 [2024-07-27 02:15:57.117001] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:29.067 [2024-07-27 02:15:57.117009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117021] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.067 [2024-07-27 02:15:57.117119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.067 [2024-07-27 02:15:57.117131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.067 [2024-07-27 02:15:57.117143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.067 [2024-07-27 02:15:57.117154] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117206] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:29.067 [2024-07-27 02:15:57.117215] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117242] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117377] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:29.067 [2024-07-27 02:15:57.117385] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:29.067 [2024-07-27 02:15:57.117391] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.067 [2024-07-27 02:15:57.117401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117434] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:29.067 [2024-07-27 02:15:57.117449] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117463] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117474] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:29.067 [2024-07-27 02:15:57.117481] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.067 [2024-07-27 02:15:57.117487] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.067 [2024-07-27 02:15:57.117496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117570] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:29.067 [2024-07-27 02:15:57.117578] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.067 [2024-07-27 02:15:57.117584] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.067 [2024-07-27 02:15:57.117593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117670] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117678] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:29.067 [2024-07-27 02:15:57.117685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:29.067 [2024-07-27 02:15:57.117693] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:29.067 [2024-07-27 02:15:57.117719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:29.067 [2024-07-27 02:15:57.117820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:29.067 [2024-07-27 02:15:57.117841] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:29.067 [2024-07-27 02:15:57.117851] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:29.067 [2024-07-27 02:15:57.117860] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:29.068 [2024-07-27 02:15:57.117867] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:29.068 [2024-07-27 02:15:57.117873] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:29.068 [2024-07-27 02:15:57.117882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:29.068 [2024-07-27 02:15:57.117893] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:29.068 [2024-07-27 02:15:57.117901] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:29.068 [2024-07-27 02:15:57.117907] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.068 [2024-07-27 02:15:57.117915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:29.068 [2024-07-27 02:15:57.117926] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:29.068 [2024-07-27 02:15:57.117934] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.068 [2024-07-27 02:15:57.117939] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.068 [2024-07-27 02:15:57.117948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.068 [2024-07-27 02:15:57.117959] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:29.068 [2024-07-27 02:15:57.117967] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:29.068 [2024-07-27 02:15:57.117973] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.068 [2024-07-27 02:15:57.117981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:29.068 [2024-07-27 02:15:57.117992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:29.068 [2024-07-27 02:15:57.118012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:29.068 [2024-07-27 02:15:57.118029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:29.068 [2024-07-27 02:15:57.118041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:29.068 ===================================================== 00:16:29.068 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:29.068 ===================================================== 00:16:29.068 Controller Capabilities/Features 00:16:29.068 ================================ 00:16:29.068 Vendor ID: 4e58 00:16:29.068 Subsystem Vendor ID: 4e58 00:16:29.068 Serial Number: SPDK1 00:16:29.068 Model Number: SPDK bdev Controller 00:16:29.068 Firmware Version: 24.09 00:16:29.068 Recommended Arb Burst: 6 00:16:29.068 IEEE OUI Identifier: 8d 6b 50 00:16:29.068 Multi-path I/O 00:16:29.068 May have multiple subsystem ports: Yes 00:16:29.068 May have multiple controllers: Yes 00:16:29.068 Associated with SR-IOV VF: No 00:16:29.068 Max Data Transfer Size: 131072 00:16:29.068 Max Number of Namespaces: 32 00:16:29.068 Max Number of I/O Queues: 127 00:16:29.068 NVMe Specification Version (VS): 1.3 00:16:29.068 NVMe Specification Version (Identify): 1.3 00:16:29.068 Maximum Queue Entries: 256 00:16:29.068 Contiguous Queues Required: Yes 00:16:29.068 Arbitration Mechanisms Supported 00:16:29.068 Weighted Round Robin: Not Supported 00:16:29.068 Vendor Specific: Not Supported 00:16:29.068 Reset Timeout: 15000 ms 00:16:29.068 Doorbell Stride: 4 bytes 00:16:29.068 NVM Subsystem Reset: Not Supported 00:16:29.068 Command Sets Supported 00:16:29.068 NVM Command Set: Supported 00:16:29.068 Boot Partition: Not Supported 00:16:29.068 Memory Page Size Minimum: 4096 bytes 00:16:29.068 Memory Page Size Maximum: 4096 bytes 00:16:29.068 Persistent Memory Region: Not Supported 00:16:29.068 Optional Asynchronous Events Supported 00:16:29.068 Namespace Attribute Notices: 
Supported 00:16:29.068 Firmware Activation Notices: Not Supported 00:16:29.068 ANA Change Notices: Not Supported 00:16:29.068 PLE Aggregate Log Change Notices: Not Supported 00:16:29.068 LBA Status Info Alert Notices: Not Supported 00:16:29.068 EGE Aggregate Log Change Notices: Not Supported 00:16:29.068 Normal NVM Subsystem Shutdown event: Not Supported 00:16:29.068 Zone Descriptor Change Notices: Not Supported 00:16:29.068 Discovery Log Change Notices: Not Supported 00:16:29.068 Controller Attributes 00:16:29.068 128-bit Host Identifier: Supported 00:16:29.068 Non-Operational Permissive Mode: Not Supported 00:16:29.068 NVM Sets: Not Supported 00:16:29.068 Read Recovery Levels: Not Supported 00:16:29.068 Endurance Groups: Not Supported 00:16:29.068 Predictable Latency Mode: Not Supported 00:16:29.068 Traffic Based Keep ALive: Not Supported 00:16:29.068 Namespace Granularity: Not Supported 00:16:29.068 SQ Associations: Not Supported 00:16:29.068 UUID List: Not Supported 00:16:29.068 Multi-Domain Subsystem: Not Supported 00:16:29.068 Fixed Capacity Management: Not Supported 00:16:29.068 Variable Capacity Management: Not Supported 00:16:29.068 Delete Endurance Group: Not Supported 00:16:29.068 Delete NVM Set: Not Supported 00:16:29.068 Extended LBA Formats Supported: Not Supported 00:16:29.068 Flexible Data Placement Supported: Not Supported 00:16:29.068 00:16:29.068 Controller Memory Buffer Support 00:16:29.068 ================================ 00:16:29.068 Supported: No 00:16:29.068 00:16:29.068 Persistent Memory Region Support 00:16:29.068 ================================ 00:16:29.068 Supported: No 00:16:29.068 00:16:29.068 Admin Command Set Attributes 00:16:29.068 ============================ 00:16:29.068 Security Send/Receive: Not Supported 00:16:29.068 Format NVM: Not Supported 00:16:29.068 Firmware Activate/Download: Not Supported 00:16:29.068 Namespace Management: Not Supported 00:16:29.068 Device Self-Test: Not Supported 00:16:29.068 Directives: Not Supported 00:16:29.068 NVMe-MI: Not Supported 00:16:29.068 Virtualization Management: Not Supported 00:16:29.068 Doorbell Buffer Config: Not Supported 00:16:29.068 Get LBA Status Capability: Not Supported 00:16:29.068 Command & Feature Lockdown Capability: Not Supported 00:16:29.068 Abort Command Limit: 4 00:16:29.068 Async Event Request Limit: 4 00:16:29.068 Number of Firmware Slots: N/A 00:16:29.068 Firmware Slot 1 Read-Only: N/A 00:16:29.068 Firmware Activation Without Reset: N/A 00:16:29.068 Multiple Update Detection Support: N/A 00:16:29.068 Firmware Update Granularity: No Information Provided 00:16:29.068 Per-Namespace SMART Log: No 00:16:29.068 Asymmetric Namespace Access Log Page: Not Supported 00:16:29.068 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:29.068 Command Effects Log Page: Supported 00:16:29.068 Get Log Page Extended Data: Supported 00:16:29.068 Telemetry Log Pages: Not Supported 00:16:29.068 Persistent Event Log Pages: Not Supported 00:16:29.068 Supported Log Pages Log Page: May Support 00:16:29.068 Commands Supported & Effects Log Page: Not Supported 00:16:29.068 Feature Identifiers & Effects Log Page:May Support 00:16:29.068 NVMe-MI Commands & Effects Log Page: May Support 00:16:29.068 Data Area 4 for Telemetry Log: Not Supported 00:16:29.068 Error Log Page Entries Supported: 128 00:16:29.068 Keep Alive: Supported 00:16:29.068 Keep Alive Granularity: 10000 ms 00:16:29.068 00:16:29.068 NVM Command Set Attributes 00:16:29.068 ========================== 00:16:29.068 Submission Queue Entry Size 00:16:29.068 Max: 64 
00:16:29.068 Min: 64 00:16:29.068 Completion Queue Entry Size 00:16:29.068 Max: 16 00:16:29.068 Min: 16 00:16:29.068 Number of Namespaces: 32 00:16:29.068 Compare Command: Supported 00:16:29.068 Write Uncorrectable Command: Not Supported 00:16:29.068 Dataset Management Command: Supported 00:16:29.068 Write Zeroes Command: Supported 00:16:29.068 Set Features Save Field: Not Supported 00:16:29.068 Reservations: Not Supported 00:16:29.068 Timestamp: Not Supported 00:16:29.069 Copy: Supported 00:16:29.069 Volatile Write Cache: Present 00:16:29.069 Atomic Write Unit (Normal): 1 00:16:29.069 Atomic Write Unit (PFail): 1 00:16:29.069 Atomic Compare & Write Unit: 1 00:16:29.069 Fused Compare & Write: Supported 00:16:29.069 Scatter-Gather List 00:16:29.069 SGL Command Set: Supported (Dword aligned) 00:16:29.069 SGL Keyed: Not Supported 00:16:29.069 SGL Bit Bucket Descriptor: Not Supported 00:16:29.069 SGL Metadata Pointer: Not Supported 00:16:29.069 Oversized SGL: Not Supported 00:16:29.069 SGL Metadata Address: Not Supported 00:16:29.069 SGL Offset: Not Supported 00:16:29.069 Transport SGL Data Block: Not Supported 00:16:29.069 Replay Protected Memory Block: Not Supported 00:16:29.069 00:16:29.069 Firmware Slot Information 00:16:29.069 ========================= 00:16:29.069 Active slot: 1 00:16:29.069 Slot 1 Firmware Revision: 24.09 00:16:29.069 00:16:29.069 00:16:29.069 Commands Supported and Effects 00:16:29.069 ============================== 00:16:29.069 Admin Commands 00:16:29.069 -------------- 00:16:29.069 Get Log Page (02h): Supported 00:16:29.069 Identify (06h): Supported 00:16:29.069 Abort (08h): Supported 00:16:29.069 Set Features (09h): Supported 00:16:29.069 Get Features (0Ah): Supported 00:16:29.069 Asynchronous Event Request (0Ch): Supported 00:16:29.069 Keep Alive (18h): Supported 00:16:29.069 I/O Commands 00:16:29.069 ------------ 00:16:29.069 Flush (00h): Supported LBA-Change 00:16:29.069 Write (01h): Supported LBA-Change 00:16:29.069 Read (02h): Supported 00:16:29.069 Compare (05h): Supported 00:16:29.069 Write Zeroes (08h): Supported LBA-Change 00:16:29.069 Dataset Management (09h): Supported LBA-Change 00:16:29.069 Copy (19h): Supported LBA-Change 00:16:29.069 00:16:29.069 Error Log 00:16:29.069 ========= 00:16:29.069 00:16:29.069 Arbitration 00:16:29.069 =========== 00:16:29.069 Arbitration Burst: 1 00:16:29.069 00:16:29.069 Power Management 00:16:29.069 ================ 00:16:29.069 Number of Power States: 1 00:16:29.069 Current Power State: Power State #0 00:16:29.069 Power State #0: 00:16:29.069 Max Power: 0.00 W 00:16:29.069 Non-Operational State: Operational 00:16:29.069 Entry Latency: Not Reported 00:16:29.069 Exit Latency: Not Reported 00:16:29.069 Relative Read Throughput: 0 00:16:29.069 Relative Read Latency: 0 00:16:29.069 Relative Write Throughput: 0 00:16:29.069 Relative Write Latency: 0 00:16:29.069 Idle Power: Not Reported 00:16:29.069 Active Power: Not Reported 00:16:29.069 Non-Operational Permissive Mode: Not Supported 00:16:29.069 00:16:29.069 Health Information 00:16:29.069 ================== 00:16:29.069 Critical Warnings: 00:16:29.069 Available Spare Space: OK 00:16:29.069 Temperature: OK 00:16:29.069 Device Reliability: OK 00:16:29.069 Read Only: No 00:16:29.069 Volatile Memory Backup: OK 00:16:29.069 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:29.069 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:29.069 Available Spare: 0% 00:16:29.069 Available Spare Threshold: 0% [2024-07-27 02:15:57.118185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:29.069 [2024-07-27 02:15:57.118202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:29.069 [2024-07-27 02:15:57.118242] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:29.069 [2024-07-27 02:15:57.118259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.069 [2024-07-27 02:15:57.118270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.069 [2024-07-27 02:15:57.118280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.069 [2024-07-27 02:15:57.118290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.069 [2024-07-27 02:15:57.122070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:29.069 [2024-07-27 02:15:57.122096] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:29.069 [2024-07-27 02:15:57.122793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.069 [2024-07-27 02:15:57.122866] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:29.069 [2024-07-27 02:15:57.122880] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:29.069 [2024-07-27 02:15:57.123802] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:29.069 [2024-07-27 02:15:57.123824] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:29.069 [2024-07-27 02:15:57.123877] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:29.069 [2024-07-27 02:15:57.125839] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:29.069 Life Percentage Used: 0% 00:16:29.069 Data Units Read: 0 00:16:29.069 Data Units Written: 0 00:16:29.069 Host Read Commands: 0 00:16:29.069 Host Write Commands: 0 00:16:29.069 Controller Busy Time: 0 minutes 00:16:29.069 Power Cycles: 0 00:16:29.069 Power On Hours: 0 hours 00:16:29.069 Unsafe Shutdowns: 0 00:16:29.069 Unrecoverable Media Errors: 0 00:16:29.069 Lifetime Error Log Entries: 0 00:16:29.069 Warning Temperature Time: 0 minutes 00:16:29.069 Critical Temperature Time: 0 minutes 00:16:29.069 00:16:29.069 Number of Queues 00:16:29.069 ================ 00:16:29.069 Number of I/O Submission Queues: 127 00:16:29.069 Number of I/O Completion Queues: 127 00:16:29.069 00:16:29.069 Active Namespaces 00:16:29.069 ================= 00:16:29.069 Namespace ID:1 00:16:29.069 Error Recovery Timeout: Unlimited 00:16:29.069 Command Set Identifier: NVM (00h) 00:16:29.069 Deallocate: Supported 00:16:29.069 Deallocated/Unwritten Error: Not
Supported 00:16:29.069 Deallocated Read Value: Unknown 00:16:29.069 Deallocate in Write Zeroes: Not Supported 00:16:29.069 Deallocated Guard Field: 0xFFFF 00:16:29.069 Flush: Supported 00:16:29.069 Reservation: Supported 00:16:29.069 Namespace Sharing Capabilities: Multiple Controllers 00:16:29.069 Size (in LBAs): 131072 (0GiB) 00:16:29.069 Capacity (in LBAs): 131072 (0GiB) 00:16:29.069 Utilization (in LBAs): 131072 (0GiB) 00:16:29.069 NGUID: 5E6FD7D834914924A66B6966BA45E2E8 00:16:29.069 UUID: 5e6fd7d8-3491-4924-a66b-6966ba45e2e8 00:16:29.069 Thin Provisioning: Not Supported 00:16:29.069 Per-NS Atomic Units: Yes 00:16:29.069 Atomic Boundary Size (Normal): 0 00:16:29.069 Atomic Boundary Size (PFail): 0 00:16:29.069 Atomic Boundary Offset: 0 00:16:29.069 Maximum Single Source Range Length: 65535 00:16:29.069 Maximum Copy Length: 65535 00:16:29.069 Maximum Source Range Count: 1 00:16:29.069 NGUID/EUI64 Never Reused: No 00:16:29.069 Namespace Write Protected: No 00:16:29.069 Number of LBA Formats: 1 00:16:29.069 Current LBA Format: LBA Format #00 00:16:29.069 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:29.069 00:16:29.069 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:29.069 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.327 [2024-07-27 02:15:57.355896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.591 Initializing NVMe Controllers 00:16:34.591 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:34.591 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:34.591 Initialization complete. Launching workers. 00:16:34.591 ======================================================== 00:16:34.591 Latency(us) 00:16:34.591 Device Information : IOPS MiB/s Average min max 00:16:34.591 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32983.53 128.84 3880.20 1186.17 7628.11 00:16:34.591 ======================================================== 00:16:34.591 Total : 32983.53 128.84 3880.20 1186.17 7628.11 00:16:34.591 00:16:34.591 [2024-07-27 02:16:02.382945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:34.591 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:34.591 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.591 [2024-07-27 02:16:02.624148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:39.879 Initializing NVMe Controllers 00:16:39.879 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:39.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:39.879 Initialization complete. Launching workers. 
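For reference, the spdk_nvme_perf invocations in this run all attach to the vfio-user endpoint through the same few SPDK calls before launching workers. A minimal sketch of that attach path in C; this is illustrative, not the perf tool's actual source, and error handling is trimmed.

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    /* Initialize the SPDK environment (hugepages, PCI access, etc.). */
    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport string the -r option passes on the command line. */
    spdk_nvme_transport_id_parse(&trid,
        "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
        "subnqn:nqn.2019-07.io.spdk:cnode1");

    /* Attach; this drives the enable handshake and identify sequence
     * seen in the debug trace above. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* ... allocate I/O qpairs and run the workload here ... */

    spdk_nvme_detach(ctrlr);
    return 0;
}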
00:16:39.879 ======================================================== 00:16:39.879 Latency(us) 00:16:39.880 Device Information : IOPS MiB/s Average min max 00:16:39.880 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16005.40 62.52 8007.22 5985.19 15794.86 00:16:39.880 ======================================================== 00:16:39.880 Total : 16005.40 62.52 8007.22 5985.19 15794.86 00:16:39.880 00:16:39.880 [2024-07-27 02:16:07.661671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:39.880 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:39.880 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.880 [2024-07-27 02:16:07.872707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:45.139 [2024-07-27 02:16:12.947500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:45.139 Initializing NVMe Controllers 00:16:45.139 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:45.139 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:45.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:45.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:45.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:45.139 Initialization complete. Launching workers. 00:16:45.139 Starting thread on core 2 00:16:45.139 Starting thread on core 3 00:16:45.139 Starting thread on core 1 00:16:45.139 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:45.139 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.139 [2024-07-27 02:16:13.260551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:48.436 [2024-07-27 02:16:16.316573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:48.436 Initializing NVMe Controllers 00:16:48.436 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:48.436 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:48.436 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:48.436 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:48.436 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:48.436 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:48.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:48.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:48.436 Initialization complete. Launching workers. 
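Each "Starting thread on core N" line below is a worker that owns a private I/O queue pair and polls its own completions, so no locking is shared across cores. A rough per-worker skeleton in SPDK's C API, assuming an already-connected ctrlr; the NSID, LBA, and buffer size here are arbitrary illustration values.

#include "spdk/env.h"
#include "spdk/nvme.h"

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    *(int *)arg = 1;  /* flag the outstanding I/O as finished */
}

void worker_read_once(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    /* Pinned, DMA-able buffer from SPDK's env layer. */
    void *buf = spdk_zmalloc(4096, 4096, NULL,
                             SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    int done = 0;

    /* Submit one read, then poll this qpair until it completes. */
    spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* lba count */,
                          io_done, &done, 0);
    while (!done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }

    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}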
00:16:48.436 Starting thread on core 1 with urgent priority queue 00:16:48.436 Starting thread on core 2 with urgent priority queue 00:16:48.436 Starting thread on core 3 with urgent priority queue 00:16:48.436 Starting thread on core 0 with urgent priority queue 00:16:48.436 SPDK bdev Controller (SPDK1 ) core 0: 5326.67 IO/s 18.77 secs/100000 ios 00:16:48.436 SPDK bdev Controller (SPDK1 ) core 1: 5725.00 IO/s 17.47 secs/100000 ios 00:16:48.436 SPDK bdev Controller (SPDK1 ) core 2: 5658.67 IO/s 17.67 secs/100000 ios 00:16:48.436 SPDK bdev Controller (SPDK1 ) core 3: 5547.00 IO/s 18.03 secs/100000 ios 00:16:48.436 ======================================================== 00:16:48.436 00:16:48.436 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:48.436 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.693 [2024-07-27 02:16:16.607576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:48.693 Initializing NVMe Controllers 00:16:48.693 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:48.693 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:48.693 Namespace ID: 1 size: 0GB 00:16:48.693 Initialization complete. 00:16:48.693 INFO: using host memory buffer for IO 00:16:48.693 Hello world! 00:16:48.693 [2024-07-27 02:16:16.641122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:48.693 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:48.693 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.950 [2024-07-27 02:16:16.922542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:49.882 Initializing NVMe Controllers 00:16:49.882 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:49.882 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:49.882 Initialization complete. Launching workers. 
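The submit/complete statistics and histograms that follow come from timestamping each I/O at submission and again at completion. In outline (not the overhead tool's exact bookkeeping), the conversion from TSC ticks to the nanosecond samples reported below looks like this:

#include <stdint.h>
#include "spdk/env.h"

/* Convert a TSC delta to nanoseconds using the measured tick rate. */
static inline uint64_t ticks_to_ns(uint64_t ticks)
{
    return ticks * UINT64_C(1000000000) / spdk_get_ticks_hz();
}

struct io_sample {
    uint64_t submit_tsc;
};

/* Called just before the I/O is submitted. */
static void mark_submit(struct io_sample *s)
{
    s->submit_tsc = spdk_get_ticks();
}

/* Called from the completion callback; yields one histogram sample. */
static uint64_t sample_complete_ns(const struct io_sample *s)
{
    return ticks_to_ns(spdk_get_ticks() - s->submit_tsc);
}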
00:16:49.882 submit (in ns) avg, min, max = 8512.9, 3558.9, 5995071.1 00:16:49.882 complete (in ns) avg, min, max = 25380.4, 2070.0, 4015648.9 00:16:49.882 00:16:49.882 Submit histogram 00:16:49.882 ================ 00:16:49.882 Range in us Cumulative Count 00:16:49.882 3.556 - 3.579: 0.3658% ( 47) 00:16:49.882 3.579 - 3.603: 1.3074% ( 121) 00:16:49.882 3.603 - 3.627: 3.8988% ( 333) 00:16:49.882 3.627 - 3.650: 9.5253% ( 723) 00:16:49.882 3.650 - 3.674: 17.6031% ( 1038) 00:16:49.882 3.674 - 3.698: 26.2490% ( 1111) 00:16:49.882 3.698 - 3.721: 34.8093% ( 1100) 00:16:49.882 3.721 - 3.745: 42.3658% ( 971) 00:16:49.882 3.745 - 3.769: 48.6226% ( 804) 00:16:49.882 3.769 - 3.793: 53.9300% ( 682) 00:16:49.882 3.793 - 3.816: 58.1946% ( 548) 00:16:49.882 3.816 - 3.840: 61.6965% ( 450) 00:16:49.882 3.840 - 3.864: 64.9105% ( 413) 00:16:49.882 3.864 - 3.887: 68.1479% ( 416) 00:16:49.882 3.887 - 3.911: 71.8833% ( 480) 00:16:49.882 3.911 - 3.935: 76.6304% ( 610) 00:16:49.882 3.935 - 3.959: 80.9650% ( 557) 00:16:49.882 3.959 - 3.982: 84.1167% ( 405) 00:16:49.882 3.982 - 4.006: 86.7004% ( 332) 00:16:49.882 4.006 - 4.030: 88.6459% ( 250) 00:16:49.882 4.030 - 4.053: 90.1401% ( 192) 00:16:49.882 4.053 - 4.077: 91.3696% ( 158) 00:16:49.882 4.077 - 4.101: 92.5292% ( 149) 00:16:49.882 4.101 - 4.124: 93.4086% ( 113) 00:16:49.882 4.124 - 4.148: 94.3580% ( 122) 00:16:49.882 4.148 - 4.172: 95.0506% ( 89) 00:16:49.882 4.172 - 4.196: 95.5953% ( 70) 00:16:49.882 4.196 - 4.219: 95.9844% ( 50) 00:16:49.882 4.219 - 4.243: 96.2879% ( 39) 00:16:49.882 4.243 - 4.267: 96.5603% ( 35) 00:16:49.882 4.267 - 4.290: 96.7549% ( 25) 00:16:49.882 4.290 - 4.314: 96.8560% ( 13) 00:16:49.882 4.314 - 4.338: 96.9728% ( 15) 00:16:49.882 4.338 - 4.361: 97.0739% ( 13) 00:16:49.882 4.361 - 4.385: 97.1984% ( 16) 00:16:49.882 4.385 - 4.409: 97.2374% ( 5) 00:16:49.883 4.409 - 4.433: 97.2918% ( 7) 00:16:49.883 4.433 - 4.456: 97.3385% ( 6) 00:16:49.883 4.456 - 4.480: 97.3619% ( 3) 00:16:49.883 4.480 - 4.504: 97.3774% ( 2) 00:16:49.883 4.504 - 4.527: 97.4397% ( 8) 00:16:49.883 4.527 - 4.551: 97.4630% ( 3) 00:16:49.883 4.551 - 4.575: 97.4864% ( 3) 00:16:49.883 4.575 - 4.599: 97.5175% ( 4) 00:16:49.883 4.599 - 4.622: 97.5253% ( 1) 00:16:49.883 4.622 - 4.646: 97.5486% ( 3) 00:16:49.883 4.646 - 4.670: 97.5642% ( 2) 00:16:49.883 4.670 - 4.693: 97.5798% ( 2) 00:16:49.883 4.693 - 4.717: 97.6187% ( 5) 00:16:49.883 4.717 - 4.741: 97.6965% ( 10) 00:16:49.883 4.741 - 4.764: 97.7665% ( 9) 00:16:49.883 4.764 - 4.788: 97.8288% ( 8) 00:16:49.883 4.788 - 4.812: 97.8755% ( 6) 00:16:49.883 4.812 - 4.836: 97.9377% ( 8) 00:16:49.883 4.836 - 4.859: 97.9689% ( 4) 00:16:49.883 4.859 - 4.883: 97.9922% ( 3) 00:16:49.883 4.883 - 4.907: 98.0233% ( 4) 00:16:49.883 4.907 - 4.930: 98.0467% ( 3) 00:16:49.883 4.930 - 4.954: 98.0856% ( 5) 00:16:49.883 4.954 - 4.978: 98.1245% ( 5) 00:16:49.883 4.978 - 5.001: 98.1323% ( 1) 00:16:49.883 5.001 - 5.025: 98.1479% ( 2) 00:16:49.883 5.025 - 5.049: 98.1712% ( 3) 00:16:49.883 5.049 - 5.073: 98.1790% ( 1) 00:16:49.883 5.073 - 5.096: 98.1946% ( 2) 00:16:49.883 5.120 - 5.144: 98.2023% ( 1) 00:16:49.883 5.144 - 5.167: 98.2101% ( 1) 00:16:49.883 5.167 - 5.191: 98.2257% ( 2) 00:16:49.883 5.191 - 5.215: 98.2335% ( 1) 00:16:49.883 5.239 - 5.262: 98.2412% ( 1) 00:16:49.883 5.262 - 5.286: 98.2490% ( 1) 00:16:49.883 5.286 - 5.310: 98.2568% ( 1) 00:16:49.883 5.404 - 5.428: 98.2724% ( 2) 00:16:49.883 5.428 - 5.452: 98.2802% ( 1) 00:16:49.883 5.452 - 5.476: 98.2879% ( 1) 00:16:49.883 5.997 - 6.021: 98.2957% ( 1) 00:16:49.883 6.044 - 6.068: 98.3035% ( 1) 
00:16:49.883 6.353 - 6.400: 98.3113% ( 1) 00:16:49.883 6.447 - 6.495: 98.3191% ( 1) 00:16:49.883 6.495 - 6.542: 98.3346% ( 2) 00:16:49.883 6.590 - 6.637: 98.3424% ( 1) 00:16:49.883 6.637 - 6.684: 98.3502% ( 1) 00:16:49.883 6.732 - 6.779: 98.3580% ( 1) 00:16:49.883 7.016 - 7.064: 98.3735% ( 2) 00:16:49.883 7.064 - 7.111: 98.3813% ( 1) 00:16:49.883 7.111 - 7.159: 98.3891% ( 1) 00:16:49.883 7.253 - 7.301: 98.4047% ( 2) 00:16:49.883 7.301 - 7.348: 98.4125% ( 1) 00:16:49.883 7.396 - 7.443: 98.4202% ( 1) 00:16:49.883 7.538 - 7.585: 98.4436% ( 3) 00:16:49.883 7.633 - 7.680: 98.4514% ( 1) 00:16:49.883 7.822 - 7.870: 98.4669% ( 2) 00:16:49.883 7.870 - 7.917: 98.4747% ( 1) 00:16:49.883 7.917 - 7.964: 98.4825% ( 1) 00:16:49.883 7.964 - 8.012: 98.4903% ( 1) 00:16:49.883 8.012 - 8.059: 98.5058% ( 2) 00:16:49.883 8.201 - 8.249: 98.5136% ( 1) 00:16:49.883 8.296 - 8.344: 98.5292% ( 2) 00:16:49.883 8.439 - 8.486: 98.5370% ( 1) 00:16:49.883 8.486 - 8.533: 98.5447% ( 1) 00:16:49.883 8.533 - 8.581: 98.5603% ( 2) 00:16:49.883 8.818 - 8.865: 98.5759% ( 2) 00:16:49.883 8.865 - 8.913: 98.5914% ( 2) 00:16:49.883 9.007 - 9.055: 98.6070% ( 2) 00:16:49.883 9.055 - 9.102: 98.6148% ( 1) 00:16:49.883 9.292 - 9.339: 98.6304% ( 2) 00:16:49.883 9.339 - 9.387: 98.6381% ( 1) 00:16:49.883 9.529 - 9.576: 98.6615% ( 3) 00:16:49.883 9.624 - 9.671: 98.6693% ( 1) 00:16:49.883 9.719 - 9.766: 98.6770% ( 1) 00:16:49.883 9.766 - 9.813: 98.6848% ( 1) 00:16:49.883 10.145 - 10.193: 98.6926% ( 1) 00:16:49.883 10.240 - 10.287: 98.7160% ( 3) 00:16:49.883 10.287 - 10.335: 98.7237% ( 1) 00:16:49.883 10.335 - 10.382: 98.7315% ( 1) 00:16:49.883 10.382 - 10.430: 98.7393% ( 1) 00:16:49.883 10.477 - 10.524: 98.7471% ( 1) 00:16:49.883 10.524 - 10.572: 98.7549% ( 1) 00:16:49.883 10.572 - 10.619: 98.7704% ( 2) 00:16:49.883 10.714 - 10.761: 98.7782% ( 1) 00:16:49.883 10.761 - 10.809: 98.7860% ( 1) 00:16:49.883 10.856 - 10.904: 98.7938% ( 1) 00:16:49.883 11.188 - 11.236: 98.8016% ( 1) 00:16:49.883 11.283 - 11.330: 98.8171% ( 2) 00:16:49.883 11.615 - 11.662: 98.8249% ( 1) 00:16:49.883 11.804 - 11.852: 98.8327% ( 1) 00:16:49.883 11.852 - 11.899: 98.8482% ( 2) 00:16:49.883 12.041 - 12.089: 98.8560% ( 1) 00:16:49.883 12.326 - 12.421: 98.8638% ( 1) 00:16:49.883 12.421 - 12.516: 98.8716% ( 1) 00:16:49.883 12.705 - 12.800: 98.8794% ( 1) 00:16:49.883 12.800 - 12.895: 98.8872% ( 1) 00:16:49.883 13.274 - 13.369: 98.8949% ( 1) 00:16:49.883 13.464 - 13.559: 98.9105% ( 2) 00:16:49.883 13.748 - 13.843: 98.9261% ( 2) 00:16:49.883 14.507 - 14.601: 98.9339% ( 1) 00:16:49.883 14.791 - 14.886: 98.9416% ( 1) 00:16:49.883 15.170 - 15.265: 98.9494% ( 1) 00:16:49.883 17.067 - 17.161: 98.9572% ( 1) 00:16:49.883 17.161 - 17.256: 98.9650% ( 1) 00:16:49.883 17.351 - 17.446: 98.9728% ( 1) 00:16:49.883 17.446 - 17.541: 99.0117% ( 5) 00:16:49.883 17.541 - 17.636: 99.0350% ( 3) 00:16:49.883 17.636 - 17.730: 99.0584% ( 3) 00:16:49.883 17.730 - 17.825: 99.0895% ( 4) 00:16:49.883 17.825 - 17.920: 99.1284% ( 5) 00:16:49.883 17.920 - 18.015: 99.1595% ( 4) 00:16:49.883 18.015 - 18.110: 99.1984% ( 5) 00:16:49.883 18.110 - 18.204: 99.2996% ( 13) 00:16:49.883 18.204 - 18.299: 99.3541% ( 7) 00:16:49.883 18.299 - 18.394: 99.4475% ( 12) 00:16:49.883 18.394 - 18.489: 99.5331% ( 11) 00:16:49.883 18.489 - 18.584: 99.5953% ( 8) 00:16:49.883 18.584 - 18.679: 99.6732% ( 10) 00:16:49.883 18.679 - 18.773: 99.7043% ( 4) 00:16:49.883 18.773 - 18.868: 99.7354% ( 4) 00:16:49.883 18.868 - 18.963: 99.7588% ( 3) 00:16:49.883 19.058 - 19.153: 99.7821% ( 3) 00:16:49.883 19.153 - 19.247: 99.7899% ( 1) 
00:16:49.883 19.342 - 19.437: 99.8054% ( 2) 00:16:49.883 19.532 - 19.627: 99.8210% ( 2) 00:16:49.883 19.627 - 19.721: 99.8288% ( 1) 00:16:49.883 20.575 - 20.670: 99.8366% ( 1) 00:16:49.883 21.997 - 22.092: 99.8444% ( 1) 00:16:49.883 22.471 - 22.566: 99.8521% ( 1) 00:16:49.883 24.178 - 24.273: 99.8599% ( 1) 00:16:49.883 24.273 - 24.462: 99.8677% ( 1) 00:16:49.883 25.221 - 25.410: 99.8755% ( 1) 00:16:49.883 26.927 - 27.117: 99.8833% ( 1) 00:16:49.883 29.203 - 29.393: 99.8911% ( 1) 00:16:49.883 3980.705 - 4004.978: 99.9611% ( 9) 00:16:49.883 4004.978 - 4029.250: 99.9922% ( 4) 00:16:49.883 5971.058 - 5995.330: 100.0000% ( 1) 00:16:49.883 00:16:49.883 Complete histogram 00:16:49.883 ================== 00:16:49.883 Range in us Cumulative Count 00:16:49.883 2.062 - 2.074: 0.3735% ( 48) 00:16:49.883 2.074 - 2.086: 25.3307% ( 3207) 00:16:49.883 2.086 - 2.098: 47.0817% ( 2795) 00:16:49.883 2.098 - 2.110: 49.5253% ( 314) 00:16:49.883 2.110 - 2.121: 57.4475% ( 1018) 00:16:49.883 2.121 - 2.133: 61.2218% ( 485) 00:16:49.883 2.133 - 2.145: 63.8132% ( 333) 00:16:49.883 2.145 - 2.157: 72.3813% ( 1101) 00:16:49.883 2.157 - 2.169: 76.0078% ( 466) 00:16:49.883 2.169 - 2.181: 77.1206% ( 143) 00:16:49.883 2.181 - 2.193: 79.8833% ( 355) 00:16:49.883 2.193 - 2.204: 81.2451% ( 175) 00:16:49.883 2.204 - 2.216: 81.9922% ( 96) 00:16:49.883 2.216 - 2.228: 86.5759% ( 589) 00:16:49.883 2.228 - 2.240: 90.1946% ( 465) 00:16:49.883 2.240 - 2.252: 91.3930% ( 154) 00:16:49.883 2.252 - 2.264: 92.9650% ( 202) 00:16:49.883 2.264 - 2.276: 93.6576% ( 89) 00:16:49.883 2.276 - 2.287: 93.9533% ( 38) 00:16:49.883 2.287 - 2.299: 94.3580% ( 52) 00:16:49.883 2.299 - 2.311: 94.9183% ( 72) 00:16:49.883 2.311 - 2.323: 95.2374% ( 41) 00:16:49.883 2.323 - 2.335: 95.3307% ( 12) 00:16:49.883 2.335 - 2.347: 95.3852% ( 7) 00:16:49.883 2.347 - 2.359: 95.4786% ( 12) 00:16:49.883 2.359 - 2.370: 95.6420% ( 21) 00:16:49.883 2.370 - 2.382: 95.9377% ( 38) 00:16:49.883 2.382 - 2.394: 96.5136% ( 74) 00:16:49.883 2.394 - 2.406: 97.0661% ( 71) 00:16:49.883 2.406 - 2.418: 97.2918% ( 29) 00:16:49.884 2.418 - 2.430: 97.4163% ( 16) 00:16:49.884 2.430 - 2.441: 97.6498% ( 30) 00:16:49.884 2.441 - 2.453: 97.7743% ( 16) 00:16:49.884 2.453 - 2.465: 97.9144% ( 18) 00:16:49.884 2.465 - 2.477: 98.0156% ( 13) 00:16:49.884 2.477 - 2.489: 98.0700% ( 7) 00:16:49.884 2.489 - 2.501: 98.1556% ( 11) 00:16:49.884 2.501 - 2.513: 98.2101% ( 7) 00:16:49.884 2.513 - 2.524: 98.2724% ( 8) 00:16:49.884 2.524 - 2.536: 98.2957% ( 3) 00:16:49.884 2.536 - 2.548: 98.3191% ( 3) 00:16:49.884 2.548 - 2.560: 98.3268% ( 1) 00:16:49.884 2.560 - 2.572: 98.3424% ( 2) 00:16:49.884 2.572 - 2.584: 98.3502% ( 1) 00:16:49.884 2.584 - 2.596: 98.3580% ( 1) 00:16:49.884 2.619 - 2.631: 98.3658% ( 1) 00:16:49.884 2.643 - 2.655: 98.3735% ( 1) 00:16:49.884 2.667 - 2.679: 98.3813% ( 1) 00:16:49.884 2.690 - 2.702: 98.3891% ( 1) 00:16:49.884 2.714 - 2.726: 98.3969% ( 1) 00:16:49.884 2.797 - 2.809: 98.4047% ( 1) 00:16:49.884 2.904 - 2.916: 98.4125% ( 1) 00:16:49.884 2.951 - 2.963: 98.4202% ( 1) 00:16:49.884 3.247 - 3.271: 98.4436% ( 3) 00:16:49.884 3.271 - 3.295: 98.4514% ( 1) 00:16:49.884 3.366 - 3.390: 98.4591% ( 1) 00:16:49.884 3.390 - 3.413: 98.4669% ( 1) 00:16:49.884 3.413 - 3.437: 98.4747% ( 1) 00:16:49.884 3.437 - 3.461: 98.4825% ( 1) 00:16:49.884 3.461 - 3.484: 98.4903% ( 1) 00:16:49.884 3.508 - 3.532: 98.5136% ( 3) 00:16:49.884 3.579 - 3.603: 98.5214% ( 1) 00:16:49.884 3.603 - 3.627: 98.5292% ( 1) 00:16:49.884 3.627 - 3.650: 98.5370% ( 1) 00:16:49.884 3.650 - 3.674: 98.5447% ( 1) 00:16:49.884 3.769 - 
3.793: 98.5603% ( 2) 00:16:49.884 3.793 - 3.816: 98.5681% ( 1) 00:16:49.884 3.816 - 3.840: 98.5759% ( 1) 00:16:49.884 3.864 - 3.887: 98.5837% ( 1) 00:16:49.884 3.887 - 3.911: 98.5914% ( 1) 00:16:49.884 3.935 - 3.959: 98.5992% ( 1) 00:16:49.884 3.982 - 4.006: 98.6070% ( 1) 00:16:49.884 4.101 - 4.124: 98.6148% ( 1) 00:16:49.884 4.124 - 4.148: 98.6304% ( 2) 00:16:49.884 5.452 - 5.476: 98.6381% ( 1) 00:16:49.884 5.594 - 5.618: 98.6459% ( 1) 00:16:49.884 5.665 - 5.689: 98.6537% ( 1) 00:16:49.884 5.926 - 5.950: 98.6615% ( 1) 00:16:49.884 6.163 - 6.210: 98.6693% ( 1) [2024-07-27 02:16:17.943720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:49.884 6.353 - 6.400: 98.6770% ( 1) 00:16:49.884 6.732 - 6.779: 98.6848% ( 1) 00:16:49.884 6.921 - 6.969: 98.7004% ( 2) 00:16:49.884 7.016 - 7.064: 98.7082% ( 1) 00:16:49.884 7.111 - 7.159: 98.7160% ( 1) 00:16:49.884 7.301 - 7.348: 98.7237% ( 1) 00:16:49.884 7.396 - 7.443: 98.7315% ( 1) 00:16:49.884 7.443 - 7.490: 98.7393% ( 1) 00:16:49.884 7.538 - 7.585: 98.7471% ( 1) 00:16:49.884 7.680 - 7.727: 98.7626% ( 2) 00:16:49.884 8.107 - 8.154: 98.7782% ( 2) 00:16:49.884 8.154 - 8.201: 98.7860% ( 1) 00:16:49.884 8.249 - 8.296: 98.7938% ( 1) 00:16:49.884 8.344 - 8.391: 98.8016% ( 1) 00:16:49.884 8.391 - 8.439: 98.8093% ( 1) 00:16:49.884 8.913 - 8.960: 98.8171% ( 1) 00:16:49.884 15.170 - 15.265: 98.8249% ( 1) 00:16:49.884 15.360 - 15.455: 98.8327% ( 1) 00:16:49.884 15.644 - 15.739: 98.8482% ( 2) 00:16:49.884 15.739 - 15.834: 98.8560% ( 1) 00:16:49.884 15.834 - 15.929: 98.8716% ( 2) 00:16:49.884 15.929 - 16.024: 98.8949% ( 3) 00:16:49.884 16.024 - 16.119: 98.9027% ( 1) 00:16:49.884 16.119 - 16.213: 98.9339% ( 4) 00:16:49.884 16.213 - 16.308: 98.9494% ( 2) 00:16:49.884 16.308 - 16.403: 98.9572% ( 1) 00:16:49.884 16.403 - 16.498: 99.0117% ( 7) 00:16:49.884 16.498 - 16.593: 99.0895% ( 10) 00:16:49.884 16.593 - 16.687: 99.1362% ( 6) 00:16:49.884 16.687 - 16.782: 99.1751% ( 5) 00:16:49.884 16.782 - 16.877: 99.2140% ( 5) 00:16:49.884 16.877 - 16.972: 99.2451% ( 4) 00:16:49.884 16.972 - 17.067: 99.2529% ( 1) 00:16:49.884 17.067 - 17.161: 99.2763% ( 3) 00:16:49.884 17.161 - 17.256: 99.2996% ( 3) 00:16:49.884 17.351 - 17.446: 99.3152% ( 2) 00:16:49.884 17.446 - 17.541: 99.3230% ( 1) 00:16:49.884 17.541 - 17.636: 99.3463% ( 3) 00:16:49.884 17.636 - 17.730: 99.3619% ( 2) 00:16:49.884 18.015 - 18.110: 99.3696% ( 1) 00:16:49.884 18.110 - 18.204: 99.3774% ( 1) 00:16:49.884 18.394 - 18.489: 99.3930% ( 2) 00:16:49.884 18.489 - 18.584: 99.4008% ( 1) 00:16:49.884 18.679 - 18.773: 99.4086% ( 1) 00:16:49.884 20.859 - 20.954: 99.4163% ( 1) 00:16:49.884 2038.898 - 2051.034: 99.4241% ( 1) 00:16:49.884 3203.982 - 3228.255: 99.4319% ( 1) 00:16:49.884 3980.705 - 4004.978: 99.8288% ( 51) 00:16:49.884 4004.978 - 4029.250: 100.0000% ( 22) 00:16:49.884 00:16:49.884 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:49.884 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:49.884 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:49.884 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:49.884 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:50.142 [ 00:16:50.142 { 00:16:50.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:50.142 "subtype": "Discovery", 00:16:50.142 "listen_addresses": [], 00:16:50.142 "allow_any_host": true, 00:16:50.142 "hosts": [] 00:16:50.142 }, 00:16:50.142 { 00:16:50.142 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:50.142 "subtype": "NVMe", 00:16:50.142 "listen_addresses": [ 00:16:50.142 { 00:16:50.142 "trtype": "VFIOUSER", 00:16:50.142 "adrfam": "IPv4", 00:16:50.142 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:50.142 "trsvcid": "0" 00:16:50.142 } 00:16:50.142 ], 00:16:50.142 "allow_any_host": true, 00:16:50.142 "hosts": [], 00:16:50.142 "serial_number": "SPDK1", 00:16:50.142 "model_number": "SPDK bdev Controller", 00:16:50.142 "max_namespaces": 32, 00:16:50.142 "min_cntlid": 1, 00:16:50.142 "max_cntlid": 65519, 00:16:50.142 "namespaces": [ 00:16:50.142 { 00:16:50.142 "nsid": 1, 00:16:50.142 "bdev_name": "Malloc1", 00:16:50.142 "name": "Malloc1", 00:16:50.142 "nguid": "5E6FD7D834914924A66B6966BA45E2E8", 00:16:50.142 "uuid": "5e6fd7d8-3491-4924-a66b-6966ba45e2e8" 00:16:50.142 } 00:16:50.142 ] 00:16:50.142 }, 00:16:50.142 { 00:16:50.142 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:50.142 "subtype": "NVMe", 00:16:50.142 "listen_addresses": [ 00:16:50.142 { 00:16:50.142 "trtype": "VFIOUSER", 00:16:50.142 "adrfam": "IPv4", 00:16:50.142 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:50.142 "trsvcid": "0" 00:16:50.142 } 00:16:50.142 ], 00:16:50.142 "allow_any_host": true, 00:16:50.142 "hosts": [], 00:16:50.142 "serial_number": "SPDK2", 00:16:50.142 "model_number": "SPDK bdev Controller", 00:16:50.142 "max_namespaces": 32, 00:16:50.142 "min_cntlid": 1, 00:16:50.142 "max_cntlid": 65519, 00:16:50.142 "namespaces": [ 00:16:50.142 { 00:16:50.142 "nsid": 1, 00:16:50.142 "bdev_name": "Malloc2", 00:16:50.142 "name": "Malloc2", 00:16:50.142 "nguid": "AB68D9FC47EC4849BBACF5F872CB1169", 00:16:50.142 "uuid": "ab68d9fc-47ec-4849-bbac-f5f872cb1169" 00:16:50.142 } 00:16:50.142 ] 00:16:50.143 } 00:16:50.143 ] 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1026341 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:50.143 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:50.401 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.401 [2024-07-27 02:16:18.415579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:50.401 Malloc3 00:16:50.401 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:50.659 [2024-07-27 02:16:18.768033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:50.659 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:50.659 Asynchronous Event Request test 00:16:50.659 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:50.659 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:50.659 Registering asynchronous event callbacks... 00:16:50.659 Starting namespace attribute notice tests for all controllers... 00:16:50.659 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:50.659 aer_cb - Changed Namespace 00:16:50.659 Cleaning up... 00:16:50.916 [ 00:16:50.916 { 00:16:50.916 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:50.916 "subtype": "Discovery", 00:16:50.916 "listen_addresses": [], 00:16:50.916 "allow_any_host": true, 00:16:50.916 "hosts": [] 00:16:50.916 }, 00:16:50.916 { 00:16:50.916 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:50.916 "subtype": "NVMe", 00:16:50.916 "listen_addresses": [ 00:16:50.916 { 00:16:50.916 "trtype": "VFIOUSER", 00:16:50.916 "adrfam": "IPv4", 00:16:50.916 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:50.916 "trsvcid": "0" 00:16:50.916 } 00:16:50.916 ], 00:16:50.916 "allow_any_host": true, 00:16:50.916 "hosts": [], 00:16:50.916 "serial_number": "SPDK1", 00:16:50.916 "model_number": "SPDK bdev Controller", 00:16:50.916 "max_namespaces": 32, 00:16:50.916 "min_cntlid": 1, 00:16:50.916 "max_cntlid": 65519, 00:16:50.916 "namespaces": [ 00:16:50.916 { 00:16:50.916 "nsid": 1, 00:16:50.916 "bdev_name": "Malloc1", 00:16:50.916 "name": "Malloc1", 00:16:50.916 "nguid": "5E6FD7D834914924A66B6966BA45E2E8", 00:16:50.916 "uuid": "5e6fd7d8-3491-4924-a66b-6966ba45e2e8" 00:16:50.916 }, 00:16:50.916 { 00:16:50.916 "nsid": 2, 00:16:50.916 "bdev_name": "Malloc3", 00:16:50.916 "name": "Malloc3", 00:16:50.916 "nguid": "AFD17D88131B4C03BE1CCF49CCCDC145", 00:16:50.916 "uuid": "afd17d88-131b-4c03-be1c-cf49cccdc145" 00:16:50.916 } 00:16:50.916 ] 00:16:50.916 }, 00:16:50.916 { 00:16:50.916 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:50.916 "subtype": "NVMe", 00:16:50.916 "listen_addresses": [ 00:16:50.916 { 00:16:50.916 "trtype": "VFIOUSER", 00:16:50.916 "adrfam": "IPv4", 00:16:50.916 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:50.916 "trsvcid": "0" 00:16:50.916 } 00:16:50.916 ], 00:16:50.916 "allow_any_host": true, 00:16:50.916 "hosts": [], 00:16:50.916 
"serial_number": "SPDK2", 00:16:50.916 "model_number": "SPDK bdev Controller", 00:16:50.916 "max_namespaces": 32, 00:16:50.916 "min_cntlid": 1, 00:16:50.916 "max_cntlid": 65519, 00:16:50.916 "namespaces": [ 00:16:50.916 { 00:16:50.916 "nsid": 1, 00:16:50.916 "bdev_name": "Malloc2", 00:16:50.916 "name": "Malloc2", 00:16:50.916 "nguid": "AB68D9FC47EC4849BBACF5F872CB1169", 00:16:50.916 "uuid": "ab68d9fc-47ec-4849-bbac-f5f872cb1169" 00:16:50.916 } 00:16:50.916 ] 00:16:50.916 } 00:16:50.916 ] 00:16:50.916 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1026341 00:16:50.916 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:50.916 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:50.916 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:50.916 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:50.916 [2024-07-27 02:16:19.041805] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:16:50.916 [2024-07-27 02:16:19.041856] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026349 ] 00:16:50.916 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.916 [2024-07-27 02:16:19.059629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:51.176 [2024-07-27 02:16:19.077378] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:51.176 [2024-07-27 02:16:19.085408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:51.176 [2024-07-27 02:16:19.085441] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb14197a000 00:16:51.176 [2024-07-27 02:16:19.086404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.087405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.088411] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.089420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.090442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.091442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.092446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.093453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:51.176 [2024-07-27 02:16:19.094465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:51.176 [2024-07-27 02:16:19.094487] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb14073c000 00:16:51.176 [2024-07-27 02:16:19.095603] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:51.176 [2024-07-27 02:16:19.107741] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:51.176 [2024-07-27 02:16:19.107773] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:51.176 [2024-07-27 02:16:19.116902] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:51.176 [2024-07-27 02:16:19.116955] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:51.176 [2024-07-27 02:16:19.117041] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:51.176 [2024-07-27 02:16:19.117083] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:51.176 [2024-07-27 02:16:19.117096] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
00:16:51.176 [2024-07-27 02:16:19.117913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:51.176 [2024-07-27 02:16:19.117936] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:51.176 [2024-07-27 02:16:19.117953] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:51.176 [2024-07-27 02:16:19.118913] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:51.176 [2024-07-27 02:16:19.118933] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:51.176 [2024-07-27 02:16:19.118946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:51.176 [2024-07-27 02:16:19.119922] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:51.176 [2024-07-27 02:16:19.119942] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:51.176 [2024-07-27 02:16:19.120927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:51.176 [2024-07-27 02:16:19.120946] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:51.176 [2024-07-27 02:16:19.120955] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:51.176 [2024-07-27 02:16:19.120966] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:51.176 [2024-07-27 02:16:19.121076] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:51.176 [2024-07-27 02:16:19.121086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:51.177 [2024-07-27 02:16:19.121095] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:51.177 [2024-07-27 02:16:19.121932] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:51.177 [2024-07-27 02:16:19.122938] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:51.177 [2024-07-27 02:16:19.123946] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:51.177 [2024-07-27 02:16:19.124939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.177 [2024-07-27 02:16:19.125008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:16:51.177 [2024-07-27 02:16:19.125957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:51.177 [2024-07-27 02:16:19.125976] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:51.177 [2024-07-27 02:16:19.125984] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.126007] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:51.177 [2024-07-27 02:16:19.126020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.126054] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:51.177 [2024-07-27 02:16:19.126076] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:51.177 [2024-07-27 02:16:19.126083] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:51.177 [2024-07-27 02:16:19.126101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:51.177 [2024-07-27 02:16:19.130077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:51.177 [2024-07-27 02:16:19.130099] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:51.177 [2024-07-27 02:16:19.130107] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:51.177 [2024-07-27 02:16:19.130115] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:51.177 [2024-07-27 02:16:19.130122] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:51.177 [2024-07-27 02:16:19.130130] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:51.177 [2024-07-27 02:16:19.130138] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:51.177 [2024-07-27 02:16:19.130146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.130158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.130178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:51.177 [2024-07-27 02:16:19.138086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:51.177 [2024-07-27 02:16:19.138114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.177 [2024-07-27 02:16:19.138128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.177 [2024-07-27 02:16:19.138141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.177 [2024-07-27 02:16:19.138153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.177 [2024-07-27 02:16:19.138162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.138176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.138191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:51.177 [2024-07-27 02:16:19.146074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:51.177 [2024-07-27 02:16:19.146093] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:51.177 [2024-07-27 02:16:19.146110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.146126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.146136] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.146154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:51.177 [2024-07-27 02:16:19.154071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:51.177 [2024-07-27 02:16:19.154147] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.154163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.154176] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:51.177 [2024-07-27 02:16:19.154184] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:51.177 [2024-07-27 02:16:19.154191] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:51.177 [2024-07-27 02:16:19.154200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:51.177 [2024-07-27 02:16:19.162074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:51.177 [2024-07-27 02:16:19.162097] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:51.177 [2024-07-27 02:16:19.162116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.162130] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.162143] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:51.177 [2024-07-27 02:16:19.162151] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:51.177 [2024-07-27 02:16:19.162157] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:51.177 [2024-07-27 02:16:19.162167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:51.177 [2024-07-27 02:16:19.170072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:51.177 [2024-07-27 02:16:19.170100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.170116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.170129] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:51.177 [2024-07-27 02:16:19.170138] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:51.177 [2024-07-27 02:16:19.170144] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:51.177 [2024-07-27 02:16:19.170154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:51.177 [2024-07-27 02:16:19.178074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:51.177 [2024-07-27 02:16:19.178097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.178109] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.178126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.178141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.178150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.178159] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.178167] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:51.177 [2024-07-27 02:16:19.178174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:51.177 [2024-07-27 02:16:19.178183] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:51.178 [2024-07-27 02:16:19.178207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:51.178 [2024-07-27 02:16:19.186072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:51.178 [2024-07-27 02:16:19.186109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:51.178 [2024-07-27 02:16:19.194071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:51.178 [2024-07-27 02:16:19.194098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:51.178 [2024-07-27 02:16:19.202073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:51.178 [2024-07-27 02:16:19.202098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:51.178 [2024-07-27 02:16:19.210086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:51.178 [2024-07-27 02:16:19.210118] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:51.178 [2024-07-27 02:16:19.210129] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:51.178 [2024-07-27 02:16:19.210135] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:51.178 [2024-07-27 02:16:19.210142] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:51.178 [2024-07-27 02:16:19.210148] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:51.178 [2024-07-27 02:16:19.210158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:51.178 [2024-07-27 02:16:19.210170] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:51.178 [2024-07-27 02:16:19.210178] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:51.178 [2024-07-27 02:16:19.210184] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:51.178 [2024-07-27 02:16:19.210193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:51.178 [2024-07-27 02:16:19.210204] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:51.178 [2024-07-27 02:16:19.210212] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:51.178 [2024-07-27 02:16:19.210221] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:51.178 [2024-07-27 02:16:19.210231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:51.178 [2024-07-27 02:16:19.210243] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:51.178 [2024-07-27 02:16:19.210251] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:51.178 [2024-07-27 02:16:19.210257] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:51.178 [2024-07-27 02:16:19.210266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:51.178 [2024-07-27 02:16:19.218087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:51.178 [2024-07-27 02:16:19.218115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:51.178 [2024-07-27 02:16:19.218132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:51.178 [2024-07-27 02:16:19.218144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:51.178 ===================================================== 00:16:51.178 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:51.178 ===================================================== 00:16:51.178 Controller Capabilities/Features 00:16:51.178 ================================ 00:16:51.178 Vendor ID: 4e58 00:16:51.178 Subsystem Vendor ID: 4e58 00:16:51.178 Serial Number: SPDK2 00:16:51.178 Model Number: SPDK bdev Controller 00:16:51.178 Firmware Version: 24.09 00:16:51.178 Recommended Arb Burst: 6 00:16:51.178 IEEE OUI Identifier: 8d 6b 50 00:16:51.178 Multi-path I/O 00:16:51.178 May have multiple subsystem ports: Yes 00:16:51.178 May have multiple controllers: Yes 00:16:51.178 Associated with SR-IOV VF: No 00:16:51.178 Max Data Transfer Size: 131072 00:16:51.178 Max Number of Namespaces: 32 00:16:51.178 Max Number of I/O Queues: 127 00:16:51.178 NVMe Specification Version (VS): 1.3 00:16:51.178 NVMe Specification Version (Identify): 1.3 00:16:51.178 Maximum Queue Entries: 256 00:16:51.178 Contiguous Queues Required: Yes 00:16:51.178 Arbitration Mechanisms Supported 00:16:51.178 Weighted Round Robin: Not Supported 00:16:51.178 Vendor Specific: Not Supported 00:16:51.178 Reset Timeout: 15000 ms 00:16:51.178 Doorbell Stride: 4 bytes 00:16:51.178 NVM Subsystem Reset: Not Supported 00:16:51.178 Command Sets Supported 00:16:51.178 NVM Command Set: Supported 00:16:51.178 Boot Partition: Not Supported 00:16:51.178 Memory Page Size Minimum: 4096 bytes 00:16:51.178 Memory Page Size Maximum: 4096 bytes 00:16:51.178 Persistent Memory Region: Not Supported 00:16:51.178 Optional Asynchronous Events Supported 00:16:51.178 Namespace Attribute Notices: 
Supported 00:16:51.178 Firmware Activation Notices: Not Supported 00:16:51.178 ANA Change Notices: Not Supported 00:16:51.178 PLE Aggregate Log Change Notices: Not Supported 00:16:51.178 LBA Status Info Alert Notices: Not Supported 00:16:51.178 EGE Aggregate Log Change Notices: Not Supported 00:16:51.178 Normal NVM Subsystem Shutdown event: Not Supported 00:16:51.178 Zone Descriptor Change Notices: Not Supported 00:16:51.178 Discovery Log Change Notices: Not Supported 00:16:51.178 Controller Attributes 00:16:51.178 128-bit Host Identifier: Supported 00:16:51.178 Non-Operational Permissive Mode: Not Supported 00:16:51.178 NVM Sets: Not Supported 00:16:51.178 Read Recovery Levels: Not Supported 00:16:51.178 Endurance Groups: Not Supported 00:16:51.178 Predictable Latency Mode: Not Supported 00:16:51.178 Traffic Based Keep ALive: Not Supported 00:16:51.178 Namespace Granularity: Not Supported 00:16:51.178 SQ Associations: Not Supported 00:16:51.178 UUID List: Not Supported 00:16:51.178 Multi-Domain Subsystem: Not Supported 00:16:51.178 Fixed Capacity Management: Not Supported 00:16:51.178 Variable Capacity Management: Not Supported 00:16:51.178 Delete Endurance Group: Not Supported 00:16:51.178 Delete NVM Set: Not Supported 00:16:51.178 Extended LBA Formats Supported: Not Supported 00:16:51.178 Flexible Data Placement Supported: Not Supported 00:16:51.178 00:16:51.178 Controller Memory Buffer Support 00:16:51.178 ================================ 00:16:51.178 Supported: No 00:16:51.178 00:16:51.178 Persistent Memory Region Support 00:16:51.178 ================================ 00:16:51.178 Supported: No 00:16:51.178 00:16:51.178 Admin Command Set Attributes 00:16:51.178 ============================ 00:16:51.178 Security Send/Receive: Not Supported 00:16:51.178 Format NVM: Not Supported 00:16:51.178 Firmware Activate/Download: Not Supported 00:16:51.178 Namespace Management: Not Supported 00:16:51.178 Device Self-Test: Not Supported 00:16:51.178 Directives: Not Supported 00:16:51.178 NVMe-MI: Not Supported 00:16:51.178 Virtualization Management: Not Supported 00:16:51.178 Doorbell Buffer Config: Not Supported 00:16:51.178 Get LBA Status Capability: Not Supported 00:16:51.178 Command & Feature Lockdown Capability: Not Supported 00:16:51.178 Abort Command Limit: 4 00:16:51.178 Async Event Request Limit: 4 00:16:51.178 Number of Firmware Slots: N/A 00:16:51.178 Firmware Slot 1 Read-Only: N/A 00:16:51.178 Firmware Activation Without Reset: N/A 00:16:51.178 Multiple Update Detection Support: N/A 00:16:51.178 Firmware Update Granularity: No Information Provided 00:16:51.178 Per-Namespace SMART Log: No 00:16:51.178 Asymmetric Namespace Access Log Page: Not Supported 00:16:51.178 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:51.178 Command Effects Log Page: Supported 00:16:51.178 Get Log Page Extended Data: Supported 00:16:51.178 Telemetry Log Pages: Not Supported 00:16:51.178 Persistent Event Log Pages: Not Supported 00:16:51.178 Supported Log Pages Log Page: May Support 00:16:51.178 Commands Supported & Effects Log Page: Not Supported 00:16:51.178 Feature Identifiers & Effects Log Page:May Support 00:16:51.178 NVMe-MI Commands & Effects Log Page: May Support 00:16:51.178 Data Area 4 for Telemetry Log: Not Supported 00:16:51.178 Error Log Page Entries Supported: 128 00:16:51.179 Keep Alive: Supported 00:16:51.179 Keep Alive Granularity: 10000 ms 00:16:51.179 00:16:51.179 NVM Command Set Attributes 00:16:51.179 ========================== 00:16:51.179 Submission Queue Entry Size 00:16:51.179 Max: 64 
00:16:51.179 Min: 64 00:16:51.179 Completion Queue Entry Size 00:16:51.179 Max: 16 00:16:51.179 Min: 16 00:16:51.179 Number of Namespaces: 32 00:16:51.179 Compare Command: Supported 00:16:51.179 Write Uncorrectable Command: Not Supported 00:16:51.179 Dataset Management Command: Supported 00:16:51.179 Write Zeroes Command: Supported 00:16:51.179 Set Features Save Field: Not Supported 00:16:51.179 Reservations: Not Supported 00:16:51.179 Timestamp: Not Supported 00:16:51.179 Copy: Supported 00:16:51.179 Volatile Write Cache: Present 00:16:51.179 Atomic Write Unit (Normal): 1 00:16:51.179 Atomic Write Unit (PFail): 1 00:16:51.179 Atomic Compare & Write Unit: 1 00:16:51.179 Fused Compare & Write: Supported 00:16:51.179 Scatter-Gather List 00:16:51.179 SGL Command Set: Supported (Dword aligned) 00:16:51.179 SGL Keyed: Not Supported 00:16:51.179 SGL Bit Bucket Descriptor: Not Supported 00:16:51.179 SGL Metadata Pointer: Not Supported 00:16:51.179 Oversized SGL: Not Supported 00:16:51.179 SGL Metadata Address: Not Supported 00:16:51.179 SGL Offset: Not Supported 00:16:51.179 Transport SGL Data Block: Not Supported 00:16:51.179 Replay Protected Memory Block: Not Supported 00:16:51.179 00:16:51.179 Firmware Slot Information 00:16:51.179 ========================= 00:16:51.179 Active slot: 1 00:16:51.179 Slot 1 Firmware Revision: 24.09 00:16:51.179 00:16:51.179 00:16:51.179 Commands Supported and Effects 00:16:51.179 ============================== 00:16:51.179 Admin Commands 00:16:51.179 -------------- 00:16:51.179 Get Log Page (02h): Supported 00:16:51.179 Identify (06h): Supported 00:16:51.179 Abort (08h): Supported 00:16:51.179 Set Features (09h): Supported 00:16:51.179 Get Features (0Ah): Supported 00:16:51.179 Asynchronous Event Request (0Ch): Supported 00:16:51.179 Keep Alive (18h): Supported 00:16:51.179 I/O Commands 00:16:51.179 ------------ 00:16:51.179 Flush (00h): Supported LBA-Change 00:16:51.179 Write (01h): Supported LBA-Change 00:16:51.179 Read (02h): Supported 00:16:51.179 Compare (05h): Supported 00:16:51.179 Write Zeroes (08h): Supported LBA-Change 00:16:51.179 Dataset Management (09h): Supported LBA-Change 00:16:51.179 Copy (19h): Supported LBA-Change 00:16:51.179 00:16:51.179 Error Log 00:16:51.179 ========= 00:16:51.179 00:16:51.179 Arbitration 00:16:51.179 =========== 00:16:51.179 Arbitration Burst: 1 00:16:51.179 00:16:51.179 Power Management 00:16:51.179 ================ 00:16:51.179 Number of Power States: 1 00:16:51.179 Current Power State: Power State #0 00:16:51.179 Power State #0: 00:16:51.179 Max Power: 0.00 W 00:16:51.179 Non-Operational State: Operational 00:16:51.179 Entry Latency: Not Reported 00:16:51.179 Exit Latency: Not Reported 00:16:51.179 Relative Read Throughput: 0 00:16:51.179 Relative Read Latency: 0 00:16:51.179 Relative Write Throughput: 0 00:16:51.179 Relative Write Latency: 0 00:16:51.179 Idle Power: Not Reported 00:16:51.179 Active Power: Not Reported 00:16:51.179 Non-Operational Permissive Mode: Not Supported 00:16:51.179 00:16:51.179 Health Information 00:16:51.179 ================== 00:16:51.179 Critical Warnings: 00:16:51.179 Available Spare Space: OK 00:16:51.179 Temperature: OK 00:16:51.179 Device Reliability: OK 00:16:51.179 Read Only: No 00:16:51.179 Volatile Memory Backup: OK 00:16:51.179 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:51.179 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:51.179 Available Spare: 0% 00:16:51.179
[2024-07-27 02:16:19.218256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:51.179 [2024-07-27 02:16:19.226085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:51.179 [2024-07-27 02:16:19.226135] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:51.179 [2024-07-27 02:16:19.226153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.179 [2024-07-27 02:16:19.226164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.179 [2024-07-27 02:16:19.226174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.179 [2024-07-27 02:16:19.226184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.179 [2024-07-27 02:16:19.226267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:51.179 [2024-07-27 02:16:19.226288] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:51.179 [2024-07-27 02:16:19.227267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.179 [2024-07-27 02:16:19.227336] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:51.179 [2024-07-27 02:16:19.227351] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:51.179 [2024-07-27 02:16:19.228284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:51.179 [2024-07-27 02:16:19.228308] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:51.179 [2024-07-27 02:16:19.228375] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:51.179 [2024-07-27 02:16:19.231072] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:51.179 Available Spare Threshold: 0% 00:16:51.179 Life Percentage Used: 0% 00:16:51.179 Data Units Read: 0 00:16:51.179 Data Units Written: 0 00:16:51.179 Host Read Commands: 0 00:16:51.179 Host Write Commands: 0 00:16:51.179 Controller Busy Time: 0 minutes 00:16:51.179 Power Cycles: 0 00:16:51.179 Power On Hours: 0 hours 00:16:51.179 Unsafe Shutdowns: 0 00:16:51.179 Unrecoverable Media Errors: 0 00:16:51.179 Lifetime Error Log Entries: 0 00:16:51.179 Warning Temperature Time: 0 minutes 00:16:51.179 Critical Temperature Time: 0 minutes 00:16:51.179 00:16:51.179 Number of Queues 00:16:51.179 ================ 00:16:51.179 Number of I/O Submission Queues: 127 00:16:51.179 Number of I/O Completion Queues: 127 00:16:51.179 00:16:51.179 Active Namespaces 00:16:51.179 ================= 00:16:51.179 Namespace ID:1 00:16:51.179 Error Recovery Timeout: Unlimited 00:16:51.179 Command Set Identifier: NVM (00h) 00:16:51.179 Deallocate: Supported 00:16:51.179 Deallocated/Unwritten Error: Not
Supported 00:16:51.179 Deallocated Read Value: Unknown 00:16:51.179 Deallocate in Write Zeroes: Not Supported 00:16:51.179 Deallocated Guard Field: 0xFFFF 00:16:51.179 Flush: Supported 00:16:51.179 Reservation: Supported 00:16:51.179 Namespace Sharing Capabilities: Multiple Controllers 00:16:51.179 Size (in LBAs): 131072 (0GiB) 00:16:51.179 Capacity (in LBAs): 131072 (0GiB) 00:16:51.179 Utilization (in LBAs): 131072 (0GiB) 00:16:51.179 NGUID: AB68D9FC47EC4849BBACF5F872CB1169 00:16:51.179 UUID: ab68d9fc-47ec-4849-bbac-f5f872cb1169 00:16:51.179 Thin Provisioning: Not Supported 00:16:51.179 Per-NS Atomic Units: Yes 00:16:51.179 Atomic Boundary Size (Normal): 0 00:16:51.179 Atomic Boundary Size (PFail): 0 00:16:51.179 Atomic Boundary Offset: 0 00:16:51.179 Maximum Single Source Range Length: 65535 00:16:51.179 Maximum Copy Length: 65535 00:16:51.179 Maximum Source Range Count: 1 00:16:51.179 NGUID/EUI64 Never Reused: No 00:16:51.179 Namespace Write Protected: No 00:16:51.179 Number of LBA Formats: 1 00:16:51.179 Current LBA Format: LBA Format #00 00:16:51.179 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:51.179 00:16:51.180 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:51.180 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.438 [2024-07-27 02:16:19.460100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:56.702 Initializing NVMe Controllers 00:16:56.702 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:56.702 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:56.702 Initialization complete. Launching workers. 00:16:56.702 ======================================================== 00:16:56.702 Latency(us) 00:16:56.702 Device Information : IOPS MiB/s Average min max 00:16:56.702 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33076.00 129.20 3869.20 1209.49 7659.24 00:16:56.703 ======================================================== 00:16:56.703 Total : 33076.00 129.20 3869.20 1209.49 7659.24 00:16:56.703 00:16:56.703 [2024-07-27 02:16:24.567420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:56.703 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:56.703 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.703 [2024-07-27 02:16:24.802064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:01.964 Initializing NVMe Controllers 00:17:01.964 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:01.964 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:01.964 Initialization complete. Launching workers. 
00:17:01.964 ======================================================== 00:17:01.964 Latency(us) 00:17:01.964 Device Information : IOPS MiB/s Average min max 00:17:01.964 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30988.98 121.05 4130.02 1230.83 7529.52 00:17:01.964 ======================================================== 00:17:01.964 Total : 30988.98 121.05 4130.02 1230.83 7529.52 00:17:01.965 00:17:01.965 [2024-07-27 02:16:29.827468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:01.965 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:01.965 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.965 [2024-07-27 02:16:30.036462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:07.222 [2024-07-27 02:16:35.184229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:07.222 Initializing NVMe Controllers 00:17:07.222 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:07.222 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:07.222 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:07.223 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:07.223 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:07.223 Initialization complete. Launching workers. 00:17:07.223 Starting thread on core 2 00:17:07.223 Starting thread on core 3 00:17:07.223 Starting thread on core 1 00:17:07.223 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:07.223 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.482 [2024-07-27 02:16:35.476949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:10.794 [2024-07-27 02:16:38.561222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:10.794 Initializing NVMe Controllers 00:17:10.794 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:10.794 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:10.794 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:10.794 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:10.794 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:10.794 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:10.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:10.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:10.794 Initialization complete. Launching workers. 
00:17:10.794 Starting thread on core 1 with urgent priority queue 00:17:10.794 Starting thread on core 2 with urgent priority queue 00:17:10.794 Starting thread on core 3 with urgent priority queue 00:17:10.794 Starting thread on core 0 with urgent priority queue 00:17:10.794 SPDK bdev Controller (SPDK2 ) core 0: 5494.00 IO/s 18.20 secs/100000 ios 00:17:10.794 SPDK bdev Controller (SPDK2 ) core 1: 6177.67 IO/s 16.19 secs/100000 ios 00:17:10.794 SPDK bdev Controller (SPDK2 ) core 2: 6021.00 IO/s 16.61 secs/100000 ios 00:17:10.794 SPDK bdev Controller (SPDK2 ) core 3: 5642.67 IO/s 17.72 secs/100000 ios 00:17:10.794 ======================================================== 00:17:10.794 00:17:10.794 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:10.794 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.794 [2024-07-27 02:16:38.853641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:10.794 Initializing NVMe Controllers 00:17:10.794 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:10.794 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:10.794 Namespace ID: 1 size: 0GB 00:17:10.794 Initialization complete. 00:17:10.794 INFO: using host memory buffer for IO 00:17:10.794 Hello world! 00:17:10.794 [2024-07-27 02:16:38.862700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:10.794 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:11.052 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.052 [2024-07-27 02:16:39.152953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:12.423 Initializing NVMe Controllers 00:17:12.423 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:12.423 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:12.423 Initialization complete. Launching workers. 
00:17:12.423 submit (in ns) avg, min, max = 7422.5, 3526.7, 4024218.9
00:17:12.423 complete (in ns) avg, min, max = 27055.5, 2075.6, 6011525.6
00:17:12.423
00:17:12.423 Submit histogram
00:17:12.423 ================
00:17:12.423 [per-bucket "Range in us / Cumulative / Count" table elided: submissions concentrate between 3.51 us and 4.24 us (~96% cumulative), creep to ~99.9% by 33 us, and end with 11 outliers in the 3.98-4.03 ms range]
00:17:12.425
00:17:12.425 Complete histogram
00:17:12.425 ==================
00:17:12.425 [per-bucket "Range in us / Cumulative / Count" table elided: completions concentrate between 2.07 us and 2.55 us (~98% cumulative), with most of the remainder below 18 us, a cluster of 77 events at 3.98-4.03 ms, and the last completion near 6.0 ms]
00:17:12.425 [2024-07-27 02:16:40.247872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:12.425
00:17:12.425 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:12.425 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:12.425 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:12.425 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:12.425 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:12.425 [ 00:17:12.425 { 00:17:12.425 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:12.425 "subtype": "Discovery", 00:17:12.425 "listen_addresses": [], 00:17:12.425 "allow_any_host": true, 00:17:12.425 "hosts": [] 00:17:12.425 }, 00:17:12.425 { 00:17:12.425 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:12.425 "subtype": "NVMe", 00:17:12.425 "listen_addresses": [ 00:17:12.425 { 00:17:12.425 "trtype": "VFIOUSER", 00:17:12.425 "adrfam": "IPv4", 00:17:12.425 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:12.425 "trsvcid": "0" 00:17:12.425 } 00:17:12.425 ], 00:17:12.425 "allow_any_host": true, 00:17:12.425 "hosts": [], 00:17:12.425 "serial_number": "SPDK1", 00:17:12.425 "model_number": "SPDK bdev Controller", 00:17:12.425 "max_namespaces": 32, 00:17:12.425 "min_cntlid": 1, 00:17:12.425 "max_cntlid": 65519, 00:17:12.425 "namespaces": [ 00:17:12.425 { 00:17:12.425 "nsid": 1, 00:17:12.425 "bdev_name": "Malloc1", 00:17:12.425 "name": "Malloc1", 00:17:12.425 "nguid": "5E6FD7D834914924A66B6966BA45E2E8", 00:17:12.425 "uuid": "5e6fd7d8-3491-4924-a66b-6966ba45e2e8" 00:17:12.426 }, 00:17:12.426 { 00:17:12.426 "nsid": 2, 00:17:12.426 "bdev_name": "Malloc3", 00:17:12.426 "name": "Malloc3", 00:17:12.426 "nguid": "AFD17D88131B4C03BE1CCF49CCCDC145", 00:17:12.426 "uuid": "afd17d88-131b-4c03-be1c-cf49cccdc145" 00:17:12.426 } 00:17:12.426 ] 00:17:12.426 }, 00:17:12.426 { 00:17:12.426 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:12.426 "subtype": "NVMe", 00:17:12.426 "listen_addresses": [ 00:17:12.426 { 00:17:12.426 "trtype": "VFIOUSER", 00:17:12.426 "adrfam": "IPv4", 00:17:12.426 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:12.426 "trsvcid": "0" 00:17:12.426 } 00:17:12.426 ], 00:17:12.426 "allow_any_host": true, 00:17:12.426 "hosts": [], 00:17:12.426 "serial_number": "SPDK2", 00:17:12.426 "model_number": "SPDK bdev Controller", 00:17:12.426 "max_namespaces": 32, 00:17:12.426 "min_cntlid": 1, 00:17:12.426 "max_cntlid": 65519, 00:17:12.426 "namespaces": [ 00:17:12.426 { 00:17:12.426 "nsid": 1, 00:17:12.426 "bdev_name": "Malloc2", 00:17:12.426 "name": "Malloc2", 00:17:12.426 "nguid": "AB68D9FC47EC4849BBACF5F872CB1169", 00:17:12.426 "uuid": "ab68d9fc-47ec-4849-bbac-f5f872cb1169" 00:17:12.426 } 00:17:12.426 ] 00:17:12.426 } 00:17:12.426 ] 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1028870 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:12.426 02:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:12.426 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:12.683 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.683 [2024-07-27 02:16:40.700557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:12.683 Malloc4 00:17:12.683 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:12.940 [2024-07-27 02:16:41.062193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:12.940 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:13.197 Asynchronous Event Request test 00:17:13.197 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:13.197 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:13.197 Registering asynchronous event callbacks... 00:17:13.197 Starting namespace attribute notice tests for all controllers... 00:17:13.197 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:13.197 aer_cb - Changed Namespace 00:17:13.197 Cleaning up... 
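The block above is the namespace-attribute AEN round trip: the aer tool (pid 1028870) attaches to /var/run/vfio-user/domain/vfio-user2/2 and waits, the target attaches Malloc4 as nsid 2 on cnode2, the controller posts an Asynchronous Event with aen_event_type 0x02 (Notice), and the host's aer_cb re-reads log page 4 (the Changed Namespace List) before cleaning up. A minimal sketch of driving the same round trip by hand, using only RPCs that appear in this log and assuming a target in the same state as this run:

    rpc.py bdev_malloc_create 64 512 --name Malloc4                        # backing bdev for the new namespace
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attach as nsid 2; this fires the AEN
    rpc.py nvmf_get_subsystems                                             # nsid 2 is now listed under cnode2

The JSON dump that follows is the output of that final nvmf_get_subsystems call, showing Malloc4 attached alongside Malloc2 on cnode2.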
00:17:13.197 [ 00:17:13.197 { 00:17:13.197 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:13.197 "subtype": "Discovery", 00:17:13.197 "listen_addresses": [], 00:17:13.197 "allow_any_host": true, 00:17:13.197 "hosts": [] 00:17:13.197 }, 00:17:13.197 { 00:17:13.197 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:13.197 "subtype": "NVMe", 00:17:13.197 "listen_addresses": [ 00:17:13.197 { 00:17:13.197 "trtype": "VFIOUSER", 00:17:13.197 "adrfam": "IPv4", 00:17:13.197 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:13.197 "trsvcid": "0" 00:17:13.197 } 00:17:13.197 ], 00:17:13.197 "allow_any_host": true, 00:17:13.197 "hosts": [], 00:17:13.197 "serial_number": "SPDK1", 00:17:13.197 "model_number": "SPDK bdev Controller", 00:17:13.197 "max_namespaces": 32, 00:17:13.197 "min_cntlid": 1, 00:17:13.197 "max_cntlid": 65519, 00:17:13.197 "namespaces": [ 00:17:13.197 { 00:17:13.197 "nsid": 1, 00:17:13.197 "bdev_name": "Malloc1", 00:17:13.197 "name": "Malloc1", 00:17:13.197 "nguid": "5E6FD7D834914924A66B6966BA45E2E8", 00:17:13.197 "uuid": "5e6fd7d8-3491-4924-a66b-6966ba45e2e8" 00:17:13.197 }, 00:17:13.197 { 00:17:13.197 "nsid": 2, 00:17:13.197 "bdev_name": "Malloc3", 00:17:13.197 "name": "Malloc3", 00:17:13.197 "nguid": "AFD17D88131B4C03BE1CCF49CCCDC145", 00:17:13.197 "uuid": "afd17d88-131b-4c03-be1c-cf49cccdc145" 00:17:13.197 } 00:17:13.197 ] 00:17:13.197 }, 00:17:13.197 { 00:17:13.197 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:13.197 "subtype": "NVMe", 00:17:13.197 "listen_addresses": [ 00:17:13.197 { 00:17:13.197 "trtype": "VFIOUSER", 00:17:13.197 "adrfam": "IPv4", 00:17:13.197 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:13.197 "trsvcid": "0" 00:17:13.197 } 00:17:13.197 ], 00:17:13.197 "allow_any_host": true, 00:17:13.198 "hosts": [], 00:17:13.198 "serial_number": "SPDK2", 00:17:13.198 "model_number": "SPDK bdev Controller", 00:17:13.198 "max_namespaces": 32, 00:17:13.198 "min_cntlid": 1, 00:17:13.198 "max_cntlid": 65519, 00:17:13.198 "namespaces": [ 00:17:13.198 { 00:17:13.198 "nsid": 1, 00:17:13.198 "bdev_name": "Malloc2", 00:17:13.198 "name": "Malloc2", 00:17:13.198 "nguid": "AB68D9FC47EC4849BBACF5F872CB1169", 00:17:13.198 "uuid": "ab68d9fc-47ec-4849-bbac-f5f872cb1169" 00:17:13.198 }, 00:17:13.198 { 00:17:13.198 "nsid": 2, 00:17:13.198 "bdev_name": "Malloc4", 00:17:13.198 "name": "Malloc4", 00:17:13.198 "nguid": "218907B8C4D74FD29FBEEAB1823700CA", 00:17:13.198 "uuid": "218907b8-c4d7-4fd2-9fbe-eab1823700ca" 00:17:13.198 } 00:17:13.198 ] 00:17:13.198 } 00:17:13.198 ] 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1028870 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1023374 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1023374 ']' 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1023374 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1023374 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1023374' 00:17:13.198 killing process with pid 1023374 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1023374 00:17:13.198 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1023374 00:17:13.762 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:13.762 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:13.762 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:13.762 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1029011 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1029011' 00:17:13.763 Process pid: 1029011 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1029011 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1029011 ']' 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.763 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:13.763 [2024-07-27 02:16:41.710236] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:13.763 [2024-07-27 02:16:41.711291] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:17:13.763 [2024-07-27 02:16:41.711361] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.763 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.763 [2024-07-27 02:16:41.742766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:13.763 [2024-07-27 02:16:41.769148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:13.763 [2024-07-27 02:16:41.854254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.763 [2024-07-27 02:16:41.854308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.763 [2024-07-27 02:16:41.854328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.763 [2024-07-27 02:16:41.854340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.763 [2024-07-27 02:16:41.854350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.763 [2024-07-27 02:16:41.854411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.763 [2024-07-27 02:16:41.854468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.763 [2024-07-27 02:16:41.854534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.763 [2024-07-27 02:16:41.854536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.021 [2024-07-27 02:16:41.949468] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:14.021 [2024-07-27 02:16:41.949710] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:14.021 [2024-07-27 02:16:41.949966] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:14.021 [2024-07-27 02:16:41.950526] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:14.021 [2024-07-27 02:16:41.950746] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
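With the old target gone, a fresh nvmf_tgt (pid 1029011) is now running in interrupt mode with its poll-group threads in intr mode, and the vfio-user endpoints are rebuilt from scratch. The setup that follows repeats one mkdir plus four RPCs per endpoint; a condensed sketch of that loop, assuming two devices as implied by the seq 1 2 trace below:

    for i in $(seq 1 2); do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB malloc bdev, 512-byte blocks
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done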
00:17:14.021 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.021 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:14.021 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:14.951 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:15.210 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:15.210 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:15.210 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:15.210 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:15.210 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:15.472 Malloc1 00:17:15.472 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:15.729 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:15.986 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:16.243 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:16.243 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:16.243 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:16.501 Malloc2 00:17:16.501 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:16.758 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:17.016 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1029011 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1029011 ']' 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1029011 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029011 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029011' 00:17:17.273 killing process with pid 1029011 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1029011 00:17:17.273 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1029011 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:17.839 00:17:17.839 real 0m52.413s 00:17:17.839 user 3m26.997s 00:17:17.839 sys 0m4.371s 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:17.839 ************************************ 00:17:17.839 END TEST nvmf_vfio_user 00:17:17.839 ************************************ 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.839 ************************************ 00:17:17.839 START TEST nvmf_vfio_user_nvme_compliance 00:17:17.839 ************************************ 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:17.839 * Looking for test storage... 
00:17:17.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.839 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... paths/export.sh@2-6 traces elided: three near-identical exports that repeatedly prepend the golangci 1.54.2, protoc 21.7 and go 1.21.1 bin directories to the system PATH, followed by export PATH and an echo of the result] 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1029606 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1029606' 00:17:17.840 Process pid: 1029606 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1029606 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1029606 ']' 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.840 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:17.840 [2024-07-27 02:16:45.861878] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:17:17.840 [2024-07-27 02:16:45.861967] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.840 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.840 [2024-07-27 02:16:45.895999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:17.840 [2024-07-27 02:16:45.927122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.098 [2024-07-27 02:16:46.018600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.098 [2024-07-27 02:16:46.018657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.098 [2024-07-27 02:16:46.018683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.098 [2024-07-27 02:16:46.018696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.098 [2024-07-27 02:16:46.018708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.098 [2024-07-27 02:16:46.018785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.098 [2024-07-27 02:16:46.018839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.098 [2024-07-27 02:16:46.018843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.098 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.098 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:18.098 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:19.030 malloc0 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.030 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:19.287 02:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.287 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:19.287 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.287 00:17:19.287 00:17:19.287 CUnit - A unit testing framework for C - Version 2.1-3 00:17:19.287 http://cunit.sourceforge.net/ 00:17:19.287 00:17:19.287 00:17:19.287 Suite: nvme_compliance 00:17:19.287 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-27 02:16:47.367055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.287 [2024-07-27 02:16:47.368529] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:19.287 [2024-07-27 02:16:47.368555] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:19.287 [2024-07-27 02:16:47.368567] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:19.288 [2024-07-27 02:16:47.370078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.288 passed 00:17:19.545 Test: admin_identify_ctrlr_verify_fused ...[2024-07-27 02:16:47.456687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.545 [2024-07-27 02:16:47.459707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.545 passed 00:17:19.545 Test: admin_identify_ns ...[2024-07-27 02:16:47.546602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.545 [2024-07-27 02:16:47.606080] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:19.545 [2024-07-27 02:16:47.614076] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:19.545 [2024-07-27 02:16:47.635212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.545 passed 00:17:19.802 Test: admin_get_features_mandatory_features ...[2024-07-27 02:16:47.722451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling 
controller 00:17:19.802 [2024-07-27 02:16:47.725474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.802 passed 00:17:19.802 Test: admin_get_features_optional_features ...[2024-07-27 02:16:47.807993] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.802 [2024-07-27 02:16:47.814025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.802 passed 00:17:19.802 Test: admin_set_features_number_of_queues ...[2024-07-27 02:16:47.895186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.059 [2024-07-27 02:16:47.999175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.059 passed 00:17:20.059 Test: admin_get_log_page_mandatory_logs ...[2024-07-27 02:16:48.085448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.059 [2024-07-27 02:16:48.088477] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.059 passed 00:17:20.059 Test: admin_get_log_page_with_lpo ...[2024-07-27 02:16:48.169532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.316 [2024-07-27 02:16:48.237087] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:20.316 [2024-07-27 02:16:48.250158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.316 passed 00:17:20.316 Test: fabric_property_get ...[2024-07-27 02:16:48.332666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.316 [2024-07-27 02:16:48.333938] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:20.316 [2024-07-27 02:16:48.335688] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.316 passed 00:17:20.316 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-27 02:16:48.419242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.316 [2024-07-27 02:16:48.420524] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:20.316 [2024-07-27 02:16:48.422265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.316 passed 00:17:20.573 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-27 02:16:48.507569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.573 [2024-07-27 02:16:48.591066] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:20.573 [2024-07-27 02:16:48.607087] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:20.573 [2024-07-27 02:16:48.612193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.573 passed 00:17:20.573 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-27 02:16:48.694702] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.573 [2024-07-27 02:16:48.695972] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:20.573 [2024-07-27 02:16:48.697724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.573 passed 00:17:20.830 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-27 02:16:48.779838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user: enabling controller 00:17:20.830 [2024-07-27 02:16:48.856073] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:20.830 [2024-07-27 02:16:48.880070] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:20.830 [2024-07-27 02:16:48.885176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.830 passed 00:17:20.830 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-27 02:16:48.967675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.830 [2024-07-27 02:16:48.968963] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:20.830 [2024-07-27 02:16:48.968998] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:20.830 [2024-07-27 02:16:48.970701] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:21.087 passed 00:17:21.087 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-27 02:16:49.053795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:21.087 [2024-07-27 02:16:49.144068] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:21.087 [2024-07-27 02:16:49.152066] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:21.087 [2024-07-27 02:16:49.160073] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:21.087 [2024-07-27 02:16:49.168071] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:21.087 [2024-07-27 02:16:49.197170] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:21.087 passed 00:17:21.344 Test: admin_create_io_sq_verify_pc ...[2024-07-27 02:16:49.282526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:21.344 [2024-07-27 02:16:49.299083] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:21.344 [2024-07-27 02:16:49.316833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:21.344 passed 00:17:21.344 Test: admin_create_io_qp_max_qps ...[2024-07-27 02:16:49.398382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.715 [2024-07-27 02:16:50.490076] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:22.715 [2024-07-27 02:16:50.869782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.972 passed 00:17:22.972 Test: admin_create_io_sq_shared_cq ...[2024-07-27 02:16:50.952538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.972 [2024-07-27 02:16:51.085067] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:22.972 [2024-07-27 02:16:51.122158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.230 passed 00:17:23.230 00:17:23.230 Run Summary: Type Total Ran Passed Failed Inactive 00:17:23.230 suites 1 1 n/a 0 0 00:17:23.230 tests 18 18 18 0 0 00:17:23.230 asserts 360 360 360 0 n/a 00:17:23.230 00:17:23.230 Elapsed time = 1.554 seconds 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 
1029606 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1029606 ']' 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1029606 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029606 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029606' 00:17:23.230 killing process with pid 1029606 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1029606 00:17:23.230 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1029606 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:23.488 00:17:23.488 real 0m5.688s 00:17:23.488 user 0m15.985s 00:17:23.488 sys 0m0.550s 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:23.488 ************************************ 00:17:23.488 END TEST nvmf_vfio_user_nvme_compliance 00:17:23.488 ************************************ 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.488 ************************************ 00:17:23.488 START TEST nvmf_vfio_user_fuzz 00:17:23.488 ************************************ 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:23.488 * Looking for test storage... 
00:17:23.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.488 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1030329 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1030329' 00:17:23.489 Process pid: 1030329 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1030329 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1030329 ']' 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
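Condensed, the bring-up just traced clears the vfio-user socket directory, starts nvmf_tgt on one core with the full trace mask, records its pid, and polls the default RPC socket until the app answers. A sketch with the jenkins workspace paths shortened; the polling loop stands in for the suite's waitforlisten and assumes the stock rpc_get_methods RPC:

rm -rf /var/run/vfio-user
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
echo "Process pid: $nvmfpid"
trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5                                      # /var/tmp/spdk.sock not listening yet
done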
00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.489 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.747 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.747 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:23.747 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:25.119 malloc0 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.119 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
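The provisioning above maps one-to-one onto plain rpc.py calls against the default /var/tmp/spdk.sock (the rpc_cmd wrapper only adds retry plumbing); every command below appears verbatim in the trace, and the final line is the fuzz invocation the log issues next:

./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a   # 30 s run, fixed seed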
00:17:25.120 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:57.214 Fuzzing completed. Shutting down the fuzz application 00:17:57.214 00:17:57.214 Dumping successful admin opcodes: 00:17:57.214 8, 9, 10, 24, 00:17:57.214 Dumping successful io opcodes: 00:17:57.214 0, 00:17:57.214 NS: 0x200003a1ef00 I/O qp, Total commands completed: 568240, total successful commands: 2183, random_seed: 500231744 00:17:57.214 NS: 0x200003a1ef00 admin qp, Total commands completed: 131184, total successful commands: 1066, random_seed: 1681675904 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1030329 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1030329 ']' 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1030329 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1030329 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1030329' 00:17:57.214 killing process with pid 1030329 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1030329 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1030329 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:57.214 00:17:57.214 real 0m32.209s 00:17:57.214 user 0m31.669s 00:17:57.214 sys 0m28.747s 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:57.214 
************************************ 00:17:57.214 END TEST nvmf_vfio_user_fuzz 00:17:57.214 ************************************ 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.214 ************************************ 00:17:57.214 START TEST nvmf_auth_target 00:17:57.214 ************************************ 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:57.214 * Looking for test storage... 00:17:57.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.214 02:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.214 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2-6 -- # [PATH prepend/export trace elided; identical paths/export.sh dump appears earlier in this log] 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
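For orientation, the build_nvmf_app_args trace that follows reduces to two array appends in this configuration (both '[' 0 -eq 1 ']' guards and the '[' -n '' ']' guard are false, and NO_HUGE is empty):

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shared-memory id plus full trace-flag mask
NVMF_APP+=("${NO_HUGE[@]}")                      # empty here; populated only for no-hugepage runs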
00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:57.215 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.781 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.782 02:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.782 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.782 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.782 02:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.782 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.782 02:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:17:57.782 00:17:57.782 --- 10.0.0.2 ping statistics --- 00:17:57.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.782 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:57.782 00:17:57.782 --- 10.0.0.1 ping statistics --- 00:17:57.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.782 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.782 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1035770 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1035770 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1035770 ']' 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.783 02:17:25 
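The nvmf_tcp_init sequence above is how a single machine plays both initiator and target: the two ice port netdevs found by the PCI scan (cvl_0_0 and cvl_0_1) are isolated from each other by moving one into a private network namespace, both get addresses on 10.0.0.0/24, the NVMe/TCP port is opened, and a ping in each direction proves the path. Replayed as a stand-alone sketch with the interface names from this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace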
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.783 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1035795 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bcf81056e7d6ae6d90393cbf6527ea227f992fb7c74cfa68 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wgE 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bcf81056e7d6ae6d90393cbf6527ea227f992fb7c74cfa68 0 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bcf81056e7d6ae6d90393cbf6527ea227f992fb7c74cfa68 0 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bcf81056e7d6ae6d90393cbf6527ea227f992fb7c74cfa68 00:17:58.041 02:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wgE 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wgE 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.wgE 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:58.041 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=141e2afd4d15b9ea2da42fabb7083721a65abc037e1bc6017061db80e6aad9fe 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.whW 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 141e2afd4d15b9ea2da42fabb7083721a65abc037e1bc6017061db80e6aad9fe 3 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 141e2afd4d15b9ea2da42fabb7083721a65abc037e1bc6017061db80e6aad9fe 3 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=141e2afd4d15b9ea2da42fabb7083721a65abc037e1bc6017061db80e6aad9fe 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.whW 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.whW 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.whW 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.300 02:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=abeeb028aa6b3a16948718b8d1ff853f 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NNI 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key abeeb028aa6b3a16948718b8d1ff853f 1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 abeeb028aa6b3a16948718b8d1ff853f 1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=abeeb028aa6b3a16948718b8d1ff853f 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NNI 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NNI 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.NNI 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=16fb36befa18e9de11c8cf07b8f14a49f065329e99fc0b7a 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.W8w 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 16fb36befa18e9de11c8cf07b8f14a49f065329e99fc0b7a 2 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
16fb36befa18e9de11c8cf07b8f14a49f065329e99fc0b7a 2 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=16fb36befa18e9de11c8cf07b8f14a49f065329e99fc0b7a 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.W8w 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.W8w 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.W8w 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c115a152657dcfc5491db2c56a4ec09b78ebef94f0aba2e7 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3fM 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c115a152657dcfc5491db2c56a4ec09b78ebef94f0aba2e7 2 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c115a152657dcfc5491db2c56a4ec09b78ebef94f0aba2e7 2 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c115a152657dcfc5491db2c56a4ec09b78ebef94f0aba2e7 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3fM 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3fM 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.3fM 00:17:58.300 02:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=75f1a78c6be1e3351aa0866676d51814 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.nXN 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 75f1a78c6be1e3351aa0866676d51814 1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 75f1a78c6be1e3351aa0866676d51814 1 00:17:58.300 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.301 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.301 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=75f1a78c6be1e3351aa0866676d51814 00:17:58.301 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:58.301 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.nXN 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.nXN 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.nXN 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e714bf2ac5239e8ce36fc66b89031c4dbf8ca1d6882145c9a34a173e313ebf41 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:58.559 
02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vq3 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e714bf2ac5239e8ce36fc66b89031c4dbf8ca1d6882145c9a34a173e313ebf41 3 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e714bf2ac5239e8ce36fc66b89031c4dbf8ca1d6882145c9a34a173e313ebf41 3 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e714bf2ac5239e8ce36fc66b89031c4dbf8ca1d6882145c9a34a173e313ebf41 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vq3 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vq3 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.vq3 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1035770 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1035770 ']' 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.559 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1035795 /var/tmp/host.sock 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1035795 ']' 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
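All four keys and the three controller keys above come from gen_dhchap_key: the requested number of hex characters from /dev/urandom, wrapped in a DHHC-1 envelope and written to a mode-0600 temp file. A condensed, hypothetical sketch follows; the suite's inline python is not shown in the trace, so the payload layout used below (base64 of the key bytes plus a little-endian CRC32 trailer, as in the NVMe DH-HMAC-CHAP secret format) is an assumption. Each resulting file is then registered twice, as keyN/ckeyN on the target over /var/tmp/spdk.sock and again on the host app via rpc.py -s /var/tmp/host.sock keyring_file_add_key, which is what the trace below does:

gen_key() {   # hypothetical condensed form of the suite's gen_dhchap_key
  local digest=$1 len=$2                           # e.g. gen_key sha512 64
  declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of key material
  file=$(mktemp -t "spdk.key-$digest.XXX")
  # assumed envelope: DHHC-1:0<id>:base64(key bytes + CRC32(key)):
  python3 - "$key" "${ids[$digest]}" > "$file" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', zlib.crc32(key))
print('DHHC-1:0%s:%s:' % (sys.argv[2], base64.b64encode(key + crc).decode()))
EOF
  chmod 0600 "$file"
  echo "$file"
}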
00:17:58.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.817 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wgE 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wgE 00:17:59.075 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wgE 00:17:59.334 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.whW ]] 00:17:59.334 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.whW 00:17:59.334 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.334 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.334 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.334 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.whW 00:17:59.334 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.whW 00:17:59.592 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:59.592 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NNI 00:17:59.592 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.592 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.592 02:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.592 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NNI 00:17:59.592 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NNI 00:17:59.850 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.W8w ]] 00:17:59.850 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W8w 00:17:59.850 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.850 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.850 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.850 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W8w 00:17:59.850 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W8w 00:18:00.108 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:00.108 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3fM 00:18:00.108 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.108 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.108 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.108 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3fM 00:18:00.108 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3fM 00:18:00.367 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.nXN ]] 00:18:00.367 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nXN 00:18:00.367 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.367 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.367 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.367 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nXN 00:18:00.367 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nXN 00:18:00.625 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
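
[Note] Every generated key file is registered twice, as the @81-86 loop above shows: once on the target (rpc_cmd, which talks to the /var/tmp/spdk.sock app waited on earlier) and once on the host-side SPDK application (hostrpc, i.e. rpc.py -s /var/tmp/host.sock), with controller keys registered under ckeyN names only when one was generated. A condensed sketch of that loop, assuming keys/ckeys arrays like those built earlier in the trace:

```bash
# Condensed form of the target/auth.sh@81-86 keyring loop traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host
    if [[ -n ${ckeys[$i]} ]]; then   # the @84 guard: ckey may be absent
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done
```
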
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:00.625 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vq3 00:18:00.625 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.625 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.625 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.625 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.vq3 00:18:00.625 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.vq3 00:18:00.883 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:00.883 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:00.883 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.883 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.883 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.883 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.141 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.399 00:18:01.399 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.399 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.399 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.657 { 00:18:01.657 "cntlid": 1, 00:18:01.657 "qid": 0, 00:18:01.657 "state": "enabled", 00:18:01.657 "thread": "nvmf_tgt_poll_group_000", 00:18:01.657 "listen_address": { 00:18:01.657 "trtype": "TCP", 00:18:01.657 "adrfam": "IPv4", 00:18:01.657 "traddr": "10.0.0.2", 00:18:01.657 "trsvcid": "4420" 00:18:01.657 }, 00:18:01.657 "peer_address": { 00:18:01.657 "trtype": "TCP", 00:18:01.657 "adrfam": "IPv4", 00:18:01.657 "traddr": "10.0.0.1", 00:18:01.657 "trsvcid": "39988" 00:18:01.657 }, 00:18:01.657 "auth": { 00:18:01.657 "state": "completed", 00:18:01.657 "digest": "sha256", 00:18:01.657 "dhgroup": "null" 00:18:01.657 } 00:18:01.657 } 00:18:01.657 ]' 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.657 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.915 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:18:02.850 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.850 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.850 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.850 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.850 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.851 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.851 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:02.851 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.109 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:03.675 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.675 { 00:18:03.675 "cntlid": 3, 00:18:03.675 "qid": 0, 00:18:03.675 "state": "enabled", 00:18:03.675 "thread": "nvmf_tgt_poll_group_000", 00:18:03.675 "listen_address": { 00:18:03.675 "trtype": "TCP", 00:18:03.675 "adrfam": "IPv4", 00:18:03.675 "traddr": "10.0.0.2", 00:18:03.675 "trsvcid": "4420" 00:18:03.675 }, 00:18:03.675 "peer_address": { 00:18:03.675 "trtype": "TCP", 00:18:03.675 "adrfam": "IPv4", 00:18:03.675 "traddr": "10.0.0.1", 00:18:03.675 "trsvcid": "40014" 00:18:03.675 }, 00:18:03.675 "auth": { 00:18:03.675 "state": "completed", 00:18:03.675 "digest": "sha256", 00:18:03.675 "dhgroup": "null" 00:18:03.675 } 00:18:03.675 } 00:18:03.675 ]' 00:18:03.675 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.933 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.933 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.933 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.933 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.933 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.933 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.933 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.191 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:18:05.124 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.124 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:05.124 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.124 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.124 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.125 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.125 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.125 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.125 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.381 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.638 00:18:05.638 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.638 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.638 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.896 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.896 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.896 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.896 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.896 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.896 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.896 { 00:18:05.896 "cntlid": 5, 00:18:05.896 "qid": 0, 00:18:05.896 "state": "enabled", 00:18:05.896 "thread": "nvmf_tgt_poll_group_000", 00:18:05.896 "listen_address": { 00:18:05.896 "trtype": "TCP", 00:18:05.896 "adrfam": "IPv4", 00:18:05.896 "traddr": "10.0.0.2", 00:18:05.896 "trsvcid": "4420" 00:18:05.896 }, 00:18:05.896 "peer_address": { 00:18:05.896 "trtype": "TCP", 00:18:05.896 "adrfam": "IPv4", 00:18:05.896 "traddr": "10.0.0.1", 00:18:05.896 "trsvcid": "40058" 00:18:05.896 }, 00:18:05.896 "auth": { 00:18:05.897 "state": "completed", 00:18:05.897 "digest": "sha256", 00:18:05.897 "dhgroup": "null" 00:18:05.897 } 00:18:05.897 } 00:18:05.897 ]' 00:18:05.897 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.897 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.897 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.897 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.897 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.154 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.154 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.154 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.412 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:18:07.346 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.346 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.346 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
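
[Note] The verification step that keeps repeating above (target/auth.sh@44-48) is worth isolating: after each attach, the host RPC confirms the controller came up as nvme0, and the target RPC dumps the subsystem's qpairs so jq can assert that auth.state reached "completed" with the expected digest and dhgroup. A sketch, with check_auth_sketch as our name for it:

```bash
# Sketch of the repeated target/auth.sh@44-48 verification (our helper name).
check_auth_sketch() {
    local digest=$1 dhgroup=$2 qpairs
    local subnqn=nqn.2024-03.io.spdk:cnode0
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host side: the attached controller must be visible as nvme0.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]] || return 1
    # Target side: the qpair must report completed DH-HMAC-CHAP auth.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]] &&
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
}

check_auth_sketch sha256 null   # matches the cntlid-5 qpair dumped above
```
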
-- # xtrace_disable 00:18:07.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.659 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.933 00:18:07.933 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.933 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.933 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.192 { 00:18:08.192 "cntlid": 7, 00:18:08.192 "qid": 0, 00:18:08.192 "state": "enabled", 00:18:08.192 "thread": "nvmf_tgt_poll_group_000", 00:18:08.192 "listen_address": { 00:18:08.192 "trtype": "TCP", 00:18:08.192 "adrfam": "IPv4", 00:18:08.192 "traddr": "10.0.0.2", 00:18:08.192 "trsvcid": "4420" 00:18:08.192 }, 00:18:08.192 "peer_address": { 00:18:08.192 "trtype": "TCP", 00:18:08.192 "adrfam": "IPv4", 00:18:08.192 "traddr": "10.0.0.1", 00:18:08.192 "trsvcid": "40094" 00:18:08.192 }, 00:18:08.192 "auth": { 00:18:08.192 "state": "completed", 00:18:08.192 "digest": "sha256", 00:18:08.192 "dhgroup": "null" 00:18:08.192 } 00:18:08.192 } 00:18:08.192 ]' 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.192 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.450 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.385 02:17:37 
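
[Note] With the null-group pass finished, the trace switches the host to ffdhe2048 for the same sha256 digest, which exposes the overall shape of the test: nested loops over digests, DH groups, and key ids, with the host's permitted algorithms pinned to exactly one pair per iteration via bdev_nvme_set_options before each connect_authenticate. Schematically (connect_authenticate is the script's own function; the array contents below are only what is visible in this slice of the log, and the full lists are likely longer):

```bash
# Shape of the target/auth.sh@91-96 loop, reconstructed from the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
digests=(sha256)                     # only sha256 appears in this slice
dhgroups=(null ffdhe2048 ffdhe3072)  # groups visible in this slice
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Host accepts exactly this digest/dhgroup pair for DH-HMAC-CHAP.
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```
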
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.385 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.643 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.901 00:18:09.901 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.901 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.901 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.159 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.159 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.159 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.159 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.159 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.159 02:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.159 { 00:18:10.159 "cntlid": 9, 00:18:10.159 "qid": 0, 00:18:10.159 "state": "enabled", 00:18:10.159 "thread": "nvmf_tgt_poll_group_000", 00:18:10.159 "listen_address": { 00:18:10.159 "trtype": "TCP", 00:18:10.159 "adrfam": "IPv4", 00:18:10.159 "traddr": "10.0.0.2", 00:18:10.159 "trsvcid": "4420" 00:18:10.159 }, 00:18:10.159 "peer_address": { 00:18:10.159 "trtype": "TCP", 00:18:10.159 "adrfam": "IPv4", 00:18:10.159 "traddr": "10.0.0.1", 00:18:10.159 "trsvcid": "40120" 00:18:10.159 }, 00:18:10.159 "auth": { 00:18:10.159 "state": "completed", 00:18:10.159 "digest": "sha256", 00:18:10.159 "dhgroup": "ffdhe2048" 00:18:10.159 } 00:18:10.159 } 00:18:10.159 ]' 00:18:10.159 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.417 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.417 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.417 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.417 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.417 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.417 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.417 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.676 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.608 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.866 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.124 00:18:12.124 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.124 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.124 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.382 { 00:18:12.382 "cntlid": 11, 00:18:12.382 "qid": 0, 00:18:12.382 "state": "enabled", 00:18:12.382 "thread": "nvmf_tgt_poll_group_000", 00:18:12.382 "listen_address": { 
00:18:12.382 "trtype": "TCP", 00:18:12.382 "adrfam": "IPv4", 00:18:12.382 "traddr": "10.0.0.2", 00:18:12.382 "trsvcid": "4420" 00:18:12.382 }, 00:18:12.382 "peer_address": { 00:18:12.382 "trtype": "TCP", 00:18:12.382 "adrfam": "IPv4", 00:18:12.382 "traddr": "10.0.0.1", 00:18:12.382 "trsvcid": "39558" 00:18:12.382 }, 00:18:12.382 "auth": { 00:18:12.382 "state": "completed", 00:18:12.382 "digest": "sha256", 00:18:12.382 "dhgroup": "ffdhe2048" 00:18:12.382 } 00:18:12.382 } 00:18:12.382 ]' 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.382 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.639 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.639 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.639 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.639 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.639 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.896 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:18:13.828 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.829 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.829 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.829 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.829 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.829 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.829 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.829 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.086 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.087 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.344 00:18:14.345 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.345 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.345 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.603 { 00:18:14.603 "cntlid": 13, 00:18:14.603 "qid": 0, 00:18:14.603 "state": "enabled", 00:18:14.603 "thread": "nvmf_tgt_poll_group_000", 00:18:14.603 "listen_address": { 00:18:14.603 "trtype": "TCP", 00:18:14.603 "adrfam": "IPv4", 00:18:14.603 "traddr": "10.0.0.2", 00:18:14.603 "trsvcid": "4420" 00:18:14.603 }, 00:18:14.603 "peer_address": { 00:18:14.603 "trtype": "TCP", 00:18:14.603 "adrfam": "IPv4", 00:18:14.603 "traddr": "10.0.0.1", 00:18:14.603 "trsvcid": "39578" 00:18:14.603 }, 00:18:14.603 "auth": { 00:18:14.603 
"state": "completed", 00:18:14.603 "digest": "sha256", 00:18:14.603 "dhgroup": "ffdhe2048" 00:18:14.603 } 00:18:14.603 } 00:18:14.603 ]' 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.603 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.860 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.860 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.860 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.860 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.860 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.117 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.048 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.306 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.564 00:18:16.564 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.564 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.564 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.821 { 00:18:16.821 "cntlid": 15, 00:18:16.821 "qid": 0, 00:18:16.821 "state": "enabled", 00:18:16.821 "thread": "nvmf_tgt_poll_group_000", 00:18:16.821 "listen_address": { 00:18:16.821 "trtype": "TCP", 00:18:16.821 "adrfam": "IPv4", 00:18:16.821 "traddr": "10.0.0.2", 00:18:16.821 "trsvcid": "4420" 00:18:16.821 }, 00:18:16.821 "peer_address": { 00:18:16.821 "trtype": "TCP", 00:18:16.821 "adrfam": "IPv4", 00:18:16.821 "traddr": "10.0.0.1", 00:18:16.821 "trsvcid": "39598" 00:18:16.821 }, 00:18:16.821 "auth": { 00:18:16.821 "state": "completed", 00:18:16.821 "digest": "sha256", 00:18:16.821 "dhgroup": "ffdhe2048" 00:18:16.821 } 00:18:16.821 } 00:18:16.821 ]' 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.821 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.821 02:17:44 
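
[Note] Note the asymmetry just above: key3 was created without a companion controller key (ckeys[3] is assigned empty earlier in this log), so its nvmf_subsystem_add_host and bdev_nvme_attach_controller calls carry only --dhchap-key key3, exercising unidirectional authentication. The script gets this for free from bash's ${var:+word} alternate-value expansion at target/auth.sh@37; in isolation:

```bash
# The ${var:+word} idiom from target/auth.sh@37: the --dhchap-ctrlr-key
# pair materializes only when a controller key was generated.
declare -a ckeys
keyid=3; ckeys[$keyid]=""                       # no ckey3, as in this run
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"    # 0 -> the RPCs run with --dhchap-key only

keyid=2; ckeys[$keyid]=/tmp/spdk.key-sha256.nXN
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey2
```
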
00:18:16.822 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:16.822 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:17.079 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=:
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:18.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
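Each round also exercises the kernel initiator: the `nvme connect` invocations above pass the host key via `--dhchap-secret` and, when bidirectional authentication is being tested, the controller key via `--dhchap-ctrl-secret` (the key3 round just shown is unidirectional, so only the host secret appears). Stripped to its shape, with placeholder secrets instead of the generated DHHC-1 strings from this run:

  # Kernel NVMe/TCP connect with DH-HMAC-CHAP (placeholder secrets).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret "DHHC-1:02:<host-key>:" \
      --dhchap-ctrl-secret "DHHC-1:01:<ctrl-key>:"
  # Tear the association down again before the next iteration.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0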
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:18.012 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:18.270 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.527 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.527 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.527 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:18.527 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:18.785
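On the SPDK host side the authentication itself happens inside `bdev_nvme_attach_controller`: `--dhchap-key`/`--dhchap-ctrlr-key` name key objects that the script registered with the host application earlier (that setup precedes this excerpt). One call in isolation, reusing the placeholder variables from the sketch above:

  # Attach over TCP, presenting key0 and requesting bidirectional
  # auth with ckey0; key names refer to previously registered keys.
  "$SPDK_ROOT/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0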
00:18:18.785 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:18.785 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:18.785 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:19.043 {
00:18:19.043 "cntlid": 17,
00:18:19.043 "qid": 0,
00:18:19.043 "state": "enabled",
00:18:19.043 "thread": "nvmf_tgt_poll_group_000",
00:18:19.043 "listen_address": {
00:18:19.043 "trtype": "TCP",
00:18:19.043 "adrfam": "IPv4",
00:18:19.043 "traddr": "10.0.0.2",
00:18:19.043 "trsvcid": "4420"
00:18:19.043 },
00:18:19.043 "peer_address": {
00:18:19.043 "trtype": "TCP",
00:18:19.043 "adrfam": "IPv4",
00:18:19.043 "traddr": "10.0.0.1",
00:18:19.043 "trsvcid": "39614"
00:18:19.043 },
00:18:19.043 "auth": {
00:18:19.043 "state": "completed",
00:18:19.043 "digest": "sha256",
00:18:19.043 "dhgroup": "ffdhe3072"
00:18:19.043 }
00:18:19.043 }
00:18:19.043 ]'
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:19.043 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:19.300 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=:
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:20.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:20.231 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:20.490 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:20.748
00:18:21.006 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:21.006 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:21.006 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:21.264 {
00:18:21.264 "cntlid": 19,
00:18:21.264 "qid": 0,
00:18:21.264 "state": "enabled",
00:18:21.264 "thread": "nvmf_tgt_poll_group_000",
00:18:21.264 "listen_address": {
00:18:21.264 "trtype": "TCP",
00:18:21.264 "adrfam": "IPv4",
00:18:21.264 "traddr": "10.0.0.2",
00:18:21.264 "trsvcid": "4420"
00:18:21.264 },
00:18:21.264 "peer_address": {
00:18:21.264 "trtype": "TCP",
00:18:21.264 "adrfam": "IPv4",
00:18:21.264 "traddr": "10.0.0.1",
00:18:21.264 "trsvcid": "39636"
00:18:21.264 },
00:18:21.264 "auth": {
00:18:21.264 "state": "completed",
00:18:21.264 "digest": "sha256",
00:18:21.264 "dhgroup": "ffdhe3072"
00:18:21.264 }
00:18:21.264 }
00:18:21.264 ]'
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:21.264 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:21.265 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:21.265 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:21.265 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:21.523 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==:
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:22.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:22.458 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
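The target-side half of the handshake is the `nvmf_subsystem_add_host` call above: it binds the host NQN to the subsystem together with the key(s) the host must present, and the matching `nvmf_subsystem_remove_host` at the end of every cycle clears the binding before the next key is tried. As a standalone pair (here rpc.py talks to the target's default socket, unlike the `hostrpc` calls):

  # Bind the host to the subsystem with the expected DH-CHAP key(s);
  # add --dhchap-ctrlr-key only for bidirectional authentication.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Unbind between iterations.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"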
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.716 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:23.282
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:23.282 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:23.282 {
00:18:23.282 "cntlid": 21,
00:18:23.282 "qid": 0,
00:18:23.282 "state": "enabled",
00:18:23.282 "thread": "nvmf_tgt_poll_group_000",
00:18:23.282 "listen_address": {
00:18:23.282 "trtype": "TCP",
00:18:23.282 "adrfam": "IPv4",
00:18:23.282 "traddr": "10.0.0.2",
00:18:23.282 "trsvcid": "4420"
00:18:23.282 },
00:18:23.282 "peer_address": {
00:18:23.282 "trtype": "TCP",
00:18:23.282 "adrfam": "IPv4",
00:18:23.282 "traddr": "10.0.0.1",
00:18:23.282 "trsvcid": "56518"
00:18:23.282 },
00:18:23.282 "auth": {
00:18:23.282 "state": "completed",
00:18:23.282 "digest": "sha256",
00:18:23.282 "dhgroup": "ffdhe3072"
00:18:23.282 }
00:18:23.282 }
00:18:23.282 ]'
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:23.541 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:23.807 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ:
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:24.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:24.774 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:25.033 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:25.291
00:18:25.291 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:25.291 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:25.291 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:25.548 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:25.548 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:25.548 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:25.548 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.548 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:25.548 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:25.548 {
00:18:25.548 "cntlid": 23,
00:18:25.548 "qid": 0,
00:18:25.548 "state": "enabled",
00:18:25.548 "thread": "nvmf_tgt_poll_group_000",
00:18:25.548 "listen_address": {
00:18:25.548 "trtype": "TCP",
00:18:25.549 "adrfam": "IPv4",
00:18:25.549 "traddr": "10.0.0.2",
00:18:25.549 "trsvcid": "4420"
00:18:25.549 },
00:18:25.549 "peer_address": {
00:18:25.549 "trtype": "TCP",
00:18:25.549 "adrfam": "IPv4",
00:18:25.549 "traddr": "10.0.0.1",
00:18:25.549 "trsvcid": "56542"
00:18:25.549 },
00:18:25.549 "auth": {
00:18:25.549 "state": "completed",
00:18:25.549 "digest": "sha256",
00:18:25.549 "dhgroup": "ffdhe3072"
00:18:25.549 }
00:18:25.549 }
00:18:25.549 ]'
00:18:25.549 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:25.549 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:25.549 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:25.806 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:25.806 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:25.806 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:25.806 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:25.806 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:26.063 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=:
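A note on the `DHHC-1:NN:...:` strings being exchanged here: this is the standard NVMe in-band-authentication secret representation, where the middle field is a base64 blob (secret plus a CRC suffix) and, as background knowledge not stated in this log, the two-digit code selects the transformation applied to the secret (`00` untransformed, `01`/`02`/`03` HMAC-SHA-256/384/512). nvme-cli can mint such secrets; flag spelling varies between versions, so treat the following purely as a sketch:

  # Generate a DH-HMAC-CHAP secret for a host NQN; verify the exact
  # option names with `nvme gen-dhchap-key --help` on your nvme-cli.
  nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn "$HOSTNQN"
  # Prints something like DHHC-1:01:<base64 secret + CRC>: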
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:26.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:26.996 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:27.254 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:27.511
00:18:27.769 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:27.769 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:27.769 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:28.028 {
00:18:28.028 "cntlid": 25,
00:18:28.028 "qid": 0,
00:18:28.028 "state": "enabled",
00:18:28.028 "thread": "nvmf_tgt_poll_group_000",
00:18:28.028 "listen_address": {
00:18:28.028 "trtype": "TCP",
00:18:28.028 "adrfam": "IPv4",
00:18:28.028 "traddr": "10.0.0.2",
00:18:28.028 "trsvcid": "4420"
00:18:28.028 },
00:18:28.028 "peer_address": {
00:18:28.028 "trtype": "TCP",
00:18:28.028 "adrfam": "IPv4",
00:18:28.028 "traddr": "10.0.0.1",
00:18:28.028 "trsvcid": "56580"
00:18:28.028 },
00:18:28.028 "auth": {
00:18:28.028 "state": "completed",
00:18:28.028 "digest": "sha256",
00:18:28.028 "dhgroup": "ffdhe4096"
00:18:28.028 }
00:18:28.028 }
00:18:28.028 ]'
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:28.028 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:28.028 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:28.028 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:28.028 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:28.028 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:28.028 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:28.286 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=:
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:29.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
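Stepping back, this whole stretch of log is two nested loops from target/auth.sh: the outer loop walks the DH groups (ffdhe2048 finished above, ffdhe3072 and ffdhe4096 follow, ffdhe6144 appears further down), the inner loop walks key ids 0-3, and `connect_authenticate` performs one configure/attach/verify/teardown pass per combination. In outline, reconstructed from the xtrace line references rather than copied from the script:

  # Sweep skeleton inferred from the @92/@93/@94/@96 trace lines;
  # sha256 is the only digest exercised in this stretch.
  for dhgroup in "${dhgroups[@]}"; do          # target/auth.sh@92
      for keyid in "${!keys[@]}"; do           # target/auth.sh@93
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"   # @96
      done
  done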
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:29.220 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:29.478 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:29.479 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:30.044
00:18:30.045 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:30.045 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:30.045 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:30.303 {
00:18:30.303 "cntlid": 27,
00:18:30.303 "qid": 0,
00:18:30.303 "state": "enabled",
00:18:30.303 "thread": "nvmf_tgt_poll_group_000",
00:18:30.303 "listen_address": {
00:18:30.303 "trtype": "TCP",
00:18:30.303 "adrfam": "IPv4",
00:18:30.303 "traddr": "10.0.0.2",
00:18:30.303 "trsvcid": "4420"
00:18:30.303 },
00:18:30.303 "peer_address": {
00:18:30.303 "trtype": "TCP",
00:18:30.303 "adrfam": "IPv4",
00:18:30.303 "traddr": "10.0.0.1",
00:18:30.303 "trsvcid": "56612"
00:18:30.303 },
00:18:30.303 "auth": {
00:18:30.303 "state": "completed",
00:18:30.303 "digest": "sha256",
00:18:30.303 "dhgroup": "ffdhe4096"
00:18:30.303 }
00:18:30.303 }
00:18:30.303 ]'
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:30.303 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:30.561 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==:
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:31.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:31.495 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:31.754 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:32.321
00:18:32.321 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:32.321 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.321 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:32.578 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.578 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:32.578 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.578 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.578 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.578 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:32.578 {
00:18:32.578 "cntlid": 29,
00:18:32.578 "qid": 0,
00:18:32.578 "state": "enabled",
00:18:32.578 "thread": "nvmf_tgt_poll_group_000",
00:18:32.578 "listen_address": {
00:18:32.578 "trtype": "TCP",
00:18:32.578 "adrfam": "IPv4",
00:18:32.578 "traddr": "10.0.0.2",
00:18:32.578 "trsvcid": "4420"
00:18:32.578 },
00:18:32.578 "peer_address": {
00:18:32.578 "trtype": "TCP",
00:18:32.578 "adrfam": "IPv4",
00:18:32.578 "traddr": "10.0.0.1",
00:18:32.578 "trsvcid": "46450"
00:18:32.578 },
00:18:32.579 "auth": {
00:18:32.579 "state": "completed",
00:18:32.579 "digest": "sha256",
00:18:32.579 "dhgroup": "ffdhe4096"
00:18:32.579 }
00:18:32.579 }
00:18:32.579 ]'
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:32.579 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:32.836 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ:
00:18:33.768 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:33.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:33.768 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:33.768 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.768 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.768 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
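One reading aid for this trace: every `hostrpc <method>` line at target/auth.sh@31 is immediately followed by the expanded rpc.py invocation, because `hostrpc` is a thin wrapper that points scripts/rpc.py at the second, host-side SPDK application on /var/tmp/host.sock, while the bare `rpc_cmd` calls go to the target's default socket. Its likely shape, paraphrased rather than copied from the script:

  # Forward an RPC to the host-side SPDK app instead of the target;
  # $rootdir is the harness's repo root variable (an assumption here).
  hostrpc() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
  }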
"${!keys[@]}" 00:18:33.768 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.768 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:34.025 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:34.025 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.025 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.026 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.592 00:18:34.592 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.592 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.592 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:34.850 {
00:18:34.850 "cntlid": 31,
00:18:34.850 "qid": 0,
00:18:34.850 "state": "enabled",
00:18:34.850 "thread": "nvmf_tgt_poll_group_000",
00:18:34.850 "listen_address": {
00:18:34.850 "trtype": "TCP",
00:18:34.850 "adrfam": "IPv4",
00:18:34.850 "traddr": "10.0.0.2",
00:18:34.850 "trsvcid": "4420"
00:18:34.850 },
00:18:34.850 "peer_address": {
00:18:34.850 "trtype": "TCP",
00:18:34.850 "adrfam": "IPv4",
00:18:34.850 "traddr": "10.0.0.1",
00:18:34.850 "trsvcid": "46476"
00:18:34.850 },
00:18:34.850 "auth": {
00:18:34.850 "state": "completed",
00:18:34.850 "digest": "sha256",
00:18:34.850 "dhgroup": "ffdhe4096"
00:18:34.850 }
00:18:34.850 }
00:18:34.850 ]'
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:34.850 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:35.108 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=:
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:36.039 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:36.296 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:36.861
00:18:36.861 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:36.861 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:36.861 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:37.119 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.120 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:37.120 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.120 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.120 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.120 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:37.120 {
00:18:37.120 "cntlid": 33,
00:18:37.120 "qid": 0,
00:18:37.120 "state": "enabled",
00:18:37.120 "thread": "nvmf_tgt_poll_group_000",
00:18:37.120 "listen_address": {
00:18:37.120 "trtype": "TCP",
00:18:37.120 "adrfam": "IPv4",
00:18:37.120 "traddr": "10.0.0.2", 00:18:37.120 "trsvcid": "4420" 00:18:37.120 }, 00:18:37.120 "peer_address": { 00:18:37.120 "trtype": "TCP", 00:18:37.120 "adrfam": "IPv4", 00:18:37.120 "traddr": "10.0.0.1", 00:18:37.120 "trsvcid": "46502" 00:18:37.120 }, 00:18:37.120 "auth": { 00:18:37.120 "state": "completed", 00:18:37.120 "digest": "sha256", 00:18:37.120 "dhgroup": "ffdhe6144" 00:18:37.120 } 00:18:37.120 } 00:18:37.120 ]' 00:18:37.120 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.377 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.377 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.377 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.377 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.377 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.377 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.377 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.635 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.568 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.826 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.391 00:18:39.391 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.391 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.391 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.650 { 00:18:39.650 "cntlid": 35, 00:18:39.650 "qid": 0, 00:18:39.650 "state": "enabled", 00:18:39.650 "thread": "nvmf_tgt_poll_group_000", 00:18:39.650 "listen_address": { 00:18:39.650 "trtype": "TCP", 00:18:39.650 "adrfam": "IPv4", 00:18:39.650 "traddr": "10.0.0.2", 00:18:39.650 "trsvcid": "4420" 00:18:39.650 }, 00:18:39.650 "peer_address": { 00:18:39.650 "trtype": "TCP", 00:18:39.650 "adrfam": "IPv4", 00:18:39.650 "traddr": "10.0.0.1", 00:18:39.650 "trsvcid": "46524" 00:18:39.650 }, 00:18:39.650 "auth": { 00:18:39.650 
"state": "completed", 00:18:39.650 "digest": "sha256", 00:18:39.650 "dhgroup": "ffdhe6144" 00:18:39.650 } 00:18:39.650 } 00:18:39.650 ]' 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.650 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.907 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.907 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.907 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.191 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.130 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.388 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:41.388 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.388 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.388 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:41.388 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:18:41.388 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.388 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.389 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.389 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.389 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.389 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.389 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.955 00:18:41.955 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.955 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.955 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.213 { 00:18:42.213 "cntlid": 37, 00:18:42.213 "qid": 0, 00:18:42.213 "state": "enabled", 00:18:42.213 "thread": "nvmf_tgt_poll_group_000", 00:18:42.213 "listen_address": { 00:18:42.213 "trtype": "TCP", 00:18:42.213 "adrfam": "IPv4", 00:18:42.213 "traddr": "10.0.0.2", 00:18:42.213 "trsvcid": "4420" 00:18:42.213 }, 00:18:42.213 "peer_address": { 00:18:42.213 "trtype": "TCP", 00:18:42.213 "adrfam": "IPv4", 00:18:42.213 "traddr": "10.0.0.1", 00:18:42.213 "trsvcid": "41970" 00:18:42.213 }, 00:18:42.213 "auth": { 00:18:42.213 "state": "completed", 00:18:42.213 "digest": "sha256", 00:18:42.213 "dhgroup": "ffdhe6144" 00:18:42.213 } 00:18:42.213 } 00:18:42.213 ]' 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.213 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.471 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:18:43.405 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.405 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.405 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.405 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.405 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.406 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.406 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:43.406 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.664 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.230 00:18:44.230 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.230 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.230 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.489 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.489 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.489 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.489 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.489 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.489 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.489 { 00:18:44.489 "cntlid": 39, 00:18:44.489 "qid": 0, 00:18:44.489 "state": "enabled", 00:18:44.489 "thread": "nvmf_tgt_poll_group_000", 00:18:44.489 "listen_address": { 00:18:44.489 "trtype": "TCP", 00:18:44.489 "adrfam": "IPv4", 00:18:44.489 "traddr": "10.0.0.2", 00:18:44.489 "trsvcid": "4420" 00:18:44.489 }, 00:18:44.489 "peer_address": { 00:18:44.489 "trtype": "TCP", 00:18:44.489 "adrfam": "IPv4", 00:18:44.489 "traddr": "10.0.0.1", 00:18:44.489 "trsvcid": "42012" 00:18:44.489 }, 00:18:44.489 "auth": { 00:18:44.489 "state": "completed", 00:18:44.489 "digest": "sha256", 00:18:44.489 "dhgroup": "ffdhe6144" 00:18:44.489 } 00:18:44.489 } 00:18:44.489 ]' 00:18:44.489 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.747 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.747 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.747 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.747 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.747 
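The iterations above all follow the same shape: target/auth.sh loops over digests, DH groups and key indexes (the "for dhgroup" / "for keyid" lines), reconfigures the host-side bdev driver, registers the host on the subsystem with the keys under test, and then attaches a controller, which is what actually forces the DH-HMAC-CHAP exchange. A minimal sketch of one iteration, in the same helpers the trace expands (hostrpc is rpc.py against /var/tmp/host.sock, as the @31 lines show; rpc_cmd drives the target; $hostnqn abbreviates the nqn.2014-08.org.nvmexpress:uuid:5b23e107-... host NQN from this log):

    # host side: restrict DH-HMAC-CHAP to one digest + DH group
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # target side: allow the host NQN with the key pair under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # attach a controller; this only succeeds if authentication completes
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

Key indexes with no controller key configured (key3 in these rounds) are registered and attached with --dhchap-key alone, which is why the ckey argument disappears from some of the expansions above.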
02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.747 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.747 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.013 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:18:45.958 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.958 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.958 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.958 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.958 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.958 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.958 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.959 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.959 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.216 02:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.216 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.217 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.148 00:18:47.148 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.149 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.149 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.406 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.406 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.406 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.406 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.406 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.406 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.406 { 00:18:47.406 "cntlid": 41, 00:18:47.406 "qid": 0, 00:18:47.406 "state": "enabled", 00:18:47.406 "thread": "nvmf_tgt_poll_group_000", 00:18:47.406 "listen_address": { 00:18:47.406 "trtype": "TCP", 00:18:47.406 "adrfam": "IPv4", 00:18:47.406 "traddr": "10.0.0.2", 00:18:47.406 "trsvcid": "4420" 00:18:47.406 }, 00:18:47.406 "peer_address": { 00:18:47.406 "trtype": "TCP", 00:18:47.406 "adrfam": "IPv4", 00:18:47.406 "traddr": "10.0.0.1", 00:18:47.406 "trsvcid": "42030" 00:18:47.406 }, 00:18:47.406 "auth": { 00:18:47.406 "state": "completed", 00:18:47.406 "digest": "sha256", 00:18:47.406 "dhgroup": "ffdhe8192" 00:18:47.406 } 00:18:47.406 } 00:18:47.407 ]' 00:18:47.407 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.407 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.407 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.407 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.407 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.407 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.407 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.407 02:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.664 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.597 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.855 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.788 00:18:49.788 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.788 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.788 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.046 { 00:18:50.046 "cntlid": 43, 00:18:50.046 "qid": 0, 00:18:50.046 "state": "enabled", 00:18:50.046 "thread": "nvmf_tgt_poll_group_000", 00:18:50.046 "listen_address": { 00:18:50.046 "trtype": "TCP", 00:18:50.046 "adrfam": "IPv4", 00:18:50.046 "traddr": "10.0.0.2", 00:18:50.046 "trsvcid": "4420" 00:18:50.046 }, 00:18:50.046 "peer_address": { 00:18:50.046 "trtype": "TCP", 00:18:50.046 "adrfam": "IPv4", 00:18:50.046 "traddr": "10.0.0.1", 00:18:50.046 "trsvcid": "42054" 00:18:50.046 }, 00:18:50.046 "auth": { 00:18:50.046 "state": "completed", 00:18:50.046 "digest": "sha256", 00:18:50.046 "dhgroup": "ffdhe8192" 00:18:50.046 } 00:18:50.046 } 00:18:50.046 ]' 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.046 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.304 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.304 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.304 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.304 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.304 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.562 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:51.495 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.753 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.687 00:18:52.687 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.687 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.687 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.945 { 00:18:52.945 "cntlid": 45, 00:18:52.945 "qid": 0, 00:18:52.945 "state": "enabled", 00:18:52.945 "thread": "nvmf_tgt_poll_group_000", 00:18:52.945 "listen_address": { 00:18:52.945 "trtype": "TCP", 00:18:52.945 "adrfam": "IPv4", 00:18:52.945 "traddr": "10.0.0.2", 00:18:52.945 "trsvcid": "4420" 00:18:52.945 }, 00:18:52.945 "peer_address": { 00:18:52.945 "trtype": "TCP", 00:18:52.945 "adrfam": "IPv4", 00:18:52.945 "traddr": "10.0.0.1", 00:18:52.945 "trsvcid": "46758" 00:18:52.945 }, 00:18:52.945 "auth": { 00:18:52.945 "state": "completed", 00:18:52.945 "digest": "sha256", 00:18:52.945 "dhgroup": "ffdhe8192" 00:18:52.945 } 00:18:52.945 } 00:18:52.945 ]' 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.945 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.204 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret 
DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:54.138 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.396 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.397 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.397 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.330 00:18:55.330 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.330 02:18:23 
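The qpairs='[ ... ]' blocks, like the one that follows, are the target's own view of the connection, fetched with nvmf_subsystem_get_qpairs; the three jq probes pin down that the negotiated digest and DH group are exactly the ones just configured and that authentication reached the completed state. Roughly (the variable plumbing here is an assumption, the trace only shows the jq filters and the [[ ... ]] comparisons):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

The outer "state": "enabled" in those dumps is the transport-level qpair state; the nested "auth" object is what these checks care about.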
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.330 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.587 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.588 { 00:18:55.588 "cntlid": 47, 00:18:55.588 "qid": 0, 00:18:55.588 "state": "enabled", 00:18:55.588 "thread": "nvmf_tgt_poll_group_000", 00:18:55.588 "listen_address": { 00:18:55.588 "trtype": "TCP", 00:18:55.588 "adrfam": "IPv4", 00:18:55.588 "traddr": "10.0.0.2", 00:18:55.588 "trsvcid": "4420" 00:18:55.588 }, 00:18:55.588 "peer_address": { 00:18:55.588 "trtype": "TCP", 00:18:55.588 "adrfam": "IPv4", 00:18:55.588 "traddr": "10.0.0.1", 00:18:55.588 "trsvcid": "46788" 00:18:55.588 }, 00:18:55.588 "auth": { 00:18:55.588 "state": "completed", 00:18:55.588 "digest": "sha256", 00:18:55.588 "dhgroup": "ffdhe8192" 00:18:55.588 } 00:18:55.588 } 00:18:55.588 ]' 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.588 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.845 02:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:18:56.777 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:57.040 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.330 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.587 00:18:57.587 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.587 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:57.587 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.844 { 00:18:57.844 "cntlid": 49, 00:18:57.844 "qid": 0, 00:18:57.844 "state": "enabled", 00:18:57.844 "thread": "nvmf_tgt_poll_group_000", 00:18:57.844 "listen_address": { 00:18:57.844 "trtype": "TCP", 00:18:57.844 "adrfam": "IPv4", 00:18:57.844 "traddr": "10.0.0.2", 00:18:57.844 "trsvcid": "4420" 00:18:57.844 }, 00:18:57.844 "peer_address": { 00:18:57.844 "trtype": "TCP", 00:18:57.844 "adrfam": "IPv4", 00:18:57.844 "traddr": "10.0.0.1", 00:18:57.844 "trsvcid": "46824" 00:18:57.844 }, 00:18:57.844 "auth": { 00:18:57.844 "state": "completed", 00:18:57.844 "digest": "sha384", 00:18:57.844 "dhgroup": "null" 00:18:57.844 } 00:18:57.844 } 00:18:57.844 ]' 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.844 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.101 02:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:18:59.033 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.033 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.033 02:18:27 
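Each combination also gets a kernel-initiator pass, visible in the nvme connect / nvme disconnect pairs: the same secrets are handed to the kernel in DHHC-1 interchange format (the DHHC-1:xx:...: strings in the trace), and the host entry is removed afterwards so the next digest/DH-group round starts clean. In outline (secrets elided; $hostnqn and $hostid stand for the uuid:5b23e107-... values from the log, and --dhchap-ctrl-secret is passed only in the rounds that configure a controller key):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The "NQN:... disconnected 1 controller(s)" acknowledgement after each disconnect is what shows that the preceding connect really established an authenticated controller.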
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.033 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.033 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.033 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.033 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:59.033 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.291 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.856 00:18:59.856 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.856 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.856 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.856 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.856 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.856 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.856 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.856 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.856 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.856 { 00:18:59.856 "cntlid": 51, 00:18:59.856 "qid": 0, 00:18:59.856 "state": "enabled", 00:18:59.856 "thread": "nvmf_tgt_poll_group_000", 00:18:59.856 "listen_address": { 00:18:59.856 "trtype": "TCP", 00:18:59.856 "adrfam": "IPv4", 00:18:59.856 "traddr": "10.0.0.2", 00:18:59.856 "trsvcid": "4420" 00:18:59.856 }, 00:18:59.856 "peer_address": { 00:18:59.856 "trtype": "TCP", 00:18:59.856 "adrfam": "IPv4", 00:18:59.856 "traddr": "10.0.0.1", 00:18:59.856 "trsvcid": "46842" 00:18:59.856 }, 00:18:59.856 "auth": { 00:18:59.856 "state": "completed", 00:18:59.856 "digest": "sha384", 00:18:59.856 "dhgroup": "null" 00:18:59.856 } 00:18:59.856 } 00:18:59.856 ]' 00:18:59.856 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.114 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.114 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.114 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.114 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.114 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.114 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.114 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.371 02:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.304 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.562 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.820 00:19:01.820 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.820 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.820 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.078 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.078 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.078 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.078 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.078 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:02.078 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.078 { 00:19:02.078 "cntlid": 53, 00:19:02.078 "qid": 0, 00:19:02.078 "state": "enabled", 00:19:02.078 "thread": "nvmf_tgt_poll_group_000", 00:19:02.078 "listen_address": { 00:19:02.078 "trtype": "TCP", 00:19:02.078 "adrfam": "IPv4", 00:19:02.078 "traddr": "10.0.0.2", 00:19:02.078 "trsvcid": "4420" 00:19:02.078 }, 00:19:02.078 "peer_address": { 00:19:02.078 "trtype": "TCP", 00:19:02.078 "adrfam": "IPv4", 00:19:02.078 "traddr": "10.0.0.1", 00:19:02.078 "trsvcid": "42452" 00:19:02.078 }, 00:19:02.078 "auth": { 00:19:02.078 "state": "completed", 00:19:02.078 "digest": "sha384", 00:19:02.078 "dhgroup": "null" 00:19:02.078 } 00:19:02.078 } 00:19:02.078 ]' 00:19:02.078 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.336 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.336 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.336 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.336 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.336 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.336 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.336 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.594 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:03.526 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.784 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.350 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.350 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.350 { 00:19:04.350 "cntlid": 55, 00:19:04.350 "qid": 0, 00:19:04.350 "state": "enabled", 00:19:04.350 "thread": "nvmf_tgt_poll_group_000", 00:19:04.350 "listen_address": { 00:19:04.350 "trtype": "TCP", 00:19:04.350 "adrfam": "IPv4", 00:19:04.350 "traddr": "10.0.0.2", 00:19:04.350 "trsvcid": "4420" 00:19:04.350 }, 00:19:04.350 "peer_address": { 
00:19:04.350 "trtype": "TCP", 00:19:04.350 "adrfam": "IPv4", 00:19:04.350 "traddr": "10.0.0.1", 00:19:04.350 "trsvcid": "42486" 00:19:04.350 }, 00:19:04.350 "auth": { 00:19:04.350 "state": "completed", 00:19:04.350 "digest": "sha384", 00:19:04.350 "dhgroup": "null" 00:19:04.350 } 00:19:04.350 } 00:19:04.350 ]' 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.607 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.865 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.799 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.057 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.314 00:19:06.572 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.572 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.572 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.572 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.572 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.572 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.572 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.829 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.829 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.829 { 00:19:06.829 "cntlid": 57, 00:19:06.829 "qid": 0, 00:19:06.829 "state": "enabled", 00:19:06.829 "thread": "nvmf_tgt_poll_group_000", 00:19:06.829 "listen_address": { 00:19:06.829 "trtype": "TCP", 00:19:06.829 "adrfam": "IPv4", 00:19:06.829 "traddr": "10.0.0.2", 00:19:06.829 "trsvcid": "4420" 00:19:06.829 }, 00:19:06.829 "peer_address": { 00:19:06.829 "trtype": "TCP", 00:19:06.829 "adrfam": "IPv4", 00:19:06.829 "traddr": "10.0.0.1", 00:19:06.829 "trsvcid": "42516" 00:19:06.829 }, 00:19:06.829 "auth": { 00:19:06.829 "state": "completed", 00:19:06.829 "digest": "sha384", 00:19:06.829 "dhgroup": "ffdhe2048" 00:19:06.829 } 00:19:06.829 } 00:19:06.829 ]' 
00:19:06.829 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.829 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.829 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.829 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.829 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.830 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.830 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.830 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.087 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:19:08.019 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.019 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.019 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.019 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.019 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.019 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.019 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.020 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.277 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.535 00:19:08.535 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.535 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.535 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.793 { 00:19:08.793 "cntlid": 59, 00:19:08.793 "qid": 0, 00:19:08.793 "state": "enabled", 00:19:08.793 "thread": "nvmf_tgt_poll_group_000", 00:19:08.793 "listen_address": { 00:19:08.793 "trtype": "TCP", 00:19:08.793 "adrfam": "IPv4", 00:19:08.793 "traddr": "10.0.0.2", 00:19:08.793 "trsvcid": "4420" 00:19:08.793 }, 00:19:08.793 "peer_address": { 00:19:08.793 "trtype": "TCP", 00:19:08.793 "adrfam": "IPv4", 00:19:08.793 "traddr": "10.0.0.1", 00:19:08.793 "trsvcid": "42538" 00:19:08.793 }, 00:19:08.793 "auth": { 00:19:08.793 "state": "completed", 00:19:08.793 "digest": "sha384", 00:19:08.793 "dhgroup": "ffdhe2048" 00:19:08.793 } 00:19:08.793 } 00:19:08.793 ]' 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.793 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.051 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.051 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.051 02:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.051 02:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.051 02:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.307 02:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.239 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.497 
02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.497 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.754 00:19:10.754 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.754 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.754 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.012 { 00:19:11.012 "cntlid": 61, 00:19:11.012 "qid": 0, 00:19:11.012 "state": "enabled", 00:19:11.012 "thread": "nvmf_tgt_poll_group_000", 00:19:11.012 "listen_address": { 00:19:11.012 "trtype": "TCP", 00:19:11.012 "adrfam": "IPv4", 00:19:11.012 "traddr": "10.0.0.2", 00:19:11.012 "trsvcid": "4420" 00:19:11.012 }, 00:19:11.012 "peer_address": { 00:19:11.012 "trtype": "TCP", 00:19:11.012 "adrfam": "IPv4", 00:19:11.012 "traddr": "10.0.0.1", 00:19:11.012 "trsvcid": "42572" 00:19:11.012 }, 00:19:11.012 "auth": { 00:19:11.012 "state": "completed", 00:19:11.012 "digest": "sha384", 00:19:11.012 "dhgroup": "ffdhe2048" 00:19:11.012 } 00:19:11.012 } 00:19:11.012 ]' 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.012 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.269 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.269 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.269 02:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.269 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.269 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.527 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.460 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.718 
02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.718 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.977 00:19:12.977 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.977 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.978 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.269 { 00:19:13.269 "cntlid": 63, 00:19:13.269 "qid": 0, 00:19:13.269 "state": "enabled", 00:19:13.269 "thread": "nvmf_tgt_poll_group_000", 00:19:13.269 "listen_address": { 00:19:13.269 "trtype": "TCP", 00:19:13.269 "adrfam": "IPv4", 00:19:13.269 "traddr": "10.0.0.2", 00:19:13.269 "trsvcid": "4420" 00:19:13.269 }, 00:19:13.269 "peer_address": { 00:19:13.269 "trtype": "TCP", 00:19:13.269 "adrfam": "IPv4", 00:19:13.269 "traddr": "10.0.0.1", 00:19:13.269 "trsvcid": "56158" 00:19:13.269 }, 00:19:13.269 "auth": { 00:19:13.269 "state": "completed", 00:19:13.269 "digest": "sha384", 00:19:13.269 "dhgroup": "ffdhe2048" 00:19:13.269 } 00:19:13.269 } 00:19:13.269 ]' 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.269 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.527 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.527 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.527 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:13.527 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:19:14.460 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.717 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.973 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.973 02:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.229 00:19:15.229 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.229 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.229 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.486 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.486 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.486 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.486 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.486 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.486 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.486 { 00:19:15.486 "cntlid": 65, 00:19:15.486 "qid": 0, 00:19:15.486 "state": "enabled", 00:19:15.486 "thread": "nvmf_tgt_poll_group_000", 00:19:15.486 "listen_address": { 00:19:15.486 "trtype": "TCP", 00:19:15.486 "adrfam": "IPv4", 00:19:15.486 "traddr": "10.0.0.2", 00:19:15.486 "trsvcid": "4420" 00:19:15.486 }, 00:19:15.486 "peer_address": { 00:19:15.486 "trtype": "TCP", 00:19:15.486 "adrfam": "IPv4", 00:19:15.486 "traddr": "10.0.0.1", 00:19:15.487 "trsvcid": "56174" 00:19:15.487 }, 00:19:15.487 "auth": { 00:19:15.487 "state": "completed", 00:19:15.487 "digest": "sha384", 00:19:15.487 "dhgroup": "ffdhe3072" 00:19:15.487 } 00:19:15.487 } 00:19:15.487 ]' 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.487 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.744 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.116 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.116 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:17.116 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.116 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.117 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.375 00:19:17.375 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.375 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.375 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.632 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.632 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.632 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.632 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.633 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.633 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.633 { 00:19:17.633 "cntlid": 67, 00:19:17.633 "qid": 0, 00:19:17.633 "state": "enabled", 00:19:17.633 "thread": "nvmf_tgt_poll_group_000", 00:19:17.633 "listen_address": { 00:19:17.633 "trtype": "TCP", 00:19:17.633 "adrfam": "IPv4", 00:19:17.633 "traddr": "10.0.0.2", 00:19:17.633 "trsvcid": "4420" 00:19:17.633 }, 00:19:17.633 "peer_address": { 00:19:17.633 "trtype": "TCP", 00:19:17.633 "adrfam": "IPv4", 00:19:17.633 "traddr": "10.0.0.1", 00:19:17.633 "trsvcid": "56188" 00:19:17.633 }, 00:19:17.633 "auth": { 00:19:17.633 "state": "completed", 00:19:17.633 "digest": "sha384", 00:19:17.633 "dhgroup": "ffdhe3072" 00:19:17.633 } 00:19:17.633 } 00:19:17.633 ]' 00:19:17.633 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.891 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.891 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.891 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.891 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.891 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.891 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.891 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.149 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:19:19.083 02:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.083 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.083 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.083 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.083 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.083 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.083 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.083 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.341 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.597 00:19:19.854 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.854 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.854 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.112 { 00:19:20.112 "cntlid": 69, 00:19:20.112 "qid": 0, 00:19:20.112 "state": "enabled", 00:19:20.112 "thread": "nvmf_tgt_poll_group_000", 00:19:20.112 "listen_address": { 00:19:20.112 "trtype": "TCP", 00:19:20.112 "adrfam": "IPv4", 00:19:20.112 "traddr": "10.0.0.2", 00:19:20.112 "trsvcid": "4420" 00:19:20.112 }, 00:19:20.112 "peer_address": { 00:19:20.112 "trtype": "TCP", 00:19:20.112 "adrfam": "IPv4", 00:19:20.112 "traddr": "10.0.0.1", 00:19:20.112 "trsvcid": "56214" 00:19:20.112 }, 00:19:20.112 "auth": { 00:19:20.112 "state": "completed", 00:19:20.112 "digest": "sha384", 00:19:20.112 "dhgroup": "ffdhe3072" 00:19:20.112 } 00:19:20.112 } 00:19:20.112 ]' 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.112 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.370 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.301 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.558 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.123 00:19:22.123 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.123 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.123 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.380 02:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.380 { 00:19:22.380 "cntlid": 71, 00:19:22.380 "qid": 0, 00:19:22.380 "state": "enabled", 00:19:22.380 "thread": "nvmf_tgt_poll_group_000", 00:19:22.380 "listen_address": { 00:19:22.380 "trtype": "TCP", 00:19:22.380 "adrfam": "IPv4", 00:19:22.380 "traddr": "10.0.0.2", 00:19:22.380 "trsvcid": "4420" 00:19:22.380 }, 00:19:22.380 "peer_address": { 00:19:22.380 "trtype": "TCP", 00:19:22.380 "adrfam": "IPv4", 00:19:22.380 "traddr": "10.0.0.1", 00:19:22.380 "trsvcid": "60454" 00:19:22.380 }, 00:19:22.380 "auth": { 00:19:22.380 "state": "completed", 00:19:22.380 "digest": "sha384", 00:19:22.380 "dhgroup": "ffdhe3072" 00:19:22.380 } 00:19:22.380 } 00:19:22.380 ]' 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.380 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.381 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.381 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.638 02:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:19:23.570 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.570 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.570 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.571 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.571 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.571 02:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.571 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.571 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.571 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.828 02:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.394 00:19:24.394 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.394 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.394 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.652 02:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.652 { 00:19:24.652 "cntlid": 73, 00:19:24.652 "qid": 0, 00:19:24.652 "state": "enabled", 00:19:24.652 "thread": "nvmf_tgt_poll_group_000", 00:19:24.652 "listen_address": { 00:19:24.652 "trtype": "TCP", 00:19:24.652 "adrfam": "IPv4", 00:19:24.652 "traddr": "10.0.0.2", 00:19:24.652 "trsvcid": "4420" 00:19:24.652 }, 00:19:24.652 "peer_address": { 00:19:24.652 "trtype": "TCP", 00:19:24.652 "adrfam": "IPv4", 00:19:24.652 "traddr": "10.0.0.1", 00:19:24.652 "trsvcid": "60478" 00:19:24.652 }, 00:19:24.652 "auth": { 00:19:24.652 "state": "completed", 00:19:24.652 "digest": "sha384", 00:19:24.652 "dhgroup": "ffdhe4096" 00:19:24.652 } 00:19:24.652 } 00:19:24.652 ]' 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.652 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.910 02:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.843 02:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.100 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.663 00:19:26.663 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.663 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.663 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:19:26.920 { 00:19:26.920 "cntlid": 75, 00:19:26.920 "qid": 0, 00:19:26.920 "state": "enabled", 00:19:26.920 "thread": "nvmf_tgt_poll_group_000", 00:19:26.920 "listen_address": { 00:19:26.920 "trtype": "TCP", 00:19:26.920 "adrfam": "IPv4", 00:19:26.920 "traddr": "10.0.0.2", 00:19:26.920 "trsvcid": "4420" 00:19:26.920 }, 00:19:26.920 "peer_address": { 00:19:26.920 "trtype": "TCP", 00:19:26.920 "adrfam": "IPv4", 00:19:26.920 "traddr": "10.0.0.1", 00:19:26.920 "trsvcid": "60508" 00:19:26.920 }, 00:19:26.920 "auth": { 00:19:26.920 "state": "completed", 00:19:26.920 "digest": "sha384", 00:19:26.920 "dhgroup": "ffdhe4096" 00:19:26.920 } 00:19:26.920 } 00:19:26.920 ]' 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.920 02:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.920 02:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.920 02:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.920 02:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.177 02:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.107 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.673 
02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.673 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.931 00:19:28.931 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.931 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.931 02:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.190 { 00:19:29.190 "cntlid": 77, 00:19:29.190 "qid": 0, 00:19:29.190 "state": "enabled", 00:19:29.190 "thread": "nvmf_tgt_poll_group_000", 00:19:29.190 "listen_address": { 00:19:29.190 "trtype": "TCP", 00:19:29.190 "adrfam": "IPv4", 00:19:29.190 "traddr": "10.0.0.2", 00:19:29.190 "trsvcid": "4420" 00:19:29.190 }, 00:19:29.190 "peer_address": { 
00:19:29.190 "trtype": "TCP", 00:19:29.190 "adrfam": "IPv4", 00:19:29.190 "traddr": "10.0.0.1", 00:19:29.190 "trsvcid": "60534" 00:19:29.190 }, 00:19:29.190 "auth": { 00:19:29.190 "state": "completed", 00:19:29.190 "digest": "sha384", 00:19:29.190 "dhgroup": "ffdhe4096" 00:19:29.190 } 00:19:29.190 } 00:19:29.190 ]' 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.190 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.783 02:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.720 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.978 02:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.236 00:19:31.236 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.236 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.236 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.494 { 00:19:31.494 "cntlid": 79, 00:19:31.494 "qid": 0, 00:19:31.494 "state": "enabled", 00:19:31.494 "thread": "nvmf_tgt_poll_group_000", 00:19:31.494 "listen_address": { 00:19:31.494 "trtype": "TCP", 00:19:31.494 "adrfam": "IPv4", 00:19:31.494 "traddr": "10.0.0.2", 00:19:31.494 "trsvcid": "4420" 00:19:31.494 }, 00:19:31.494 "peer_address": { 00:19:31.494 "trtype": "TCP", 00:19:31.494 "adrfam": "IPv4", 00:19:31.494 "traddr": "10.0.0.1", 00:19:31.494 "trsvcid": "60554" 00:19:31.494 }, 00:19:31.494 "auth": { 00:19:31.494 "state": "completed", 00:19:31.494 "digest": "sha384", 00:19:31.494 "dhgroup": "ffdhe4096" 00:19:31.494 } 00:19:31.494 } 00:19:31.494 ]' 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.494 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.752 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.752 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.752 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.011 02:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.945 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
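The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) record just above is the bash ${parameter:+word} idiom: the ckey array receives the --dhchap-ctrlr-key flag only when a controller key exists for that keyid, which is why the key0-key2 passes in this log negotiate bidirectional DH-HMAC-CHAP while the key3 passes authenticate the host only. A standalone illustration (values here are hypothetical, chosen to mirror this run):

#!/usr/bin/env bash
ckeys=("ck0" "ck1" "ck2" "")   # key3 deliberately has no controller key, as in this run
for keyid in 0 3; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # empty expansion -> empty array
    echo "keyid=$keyid extra args: ${ckey[*]:-<none, unidirectional>}"
done
# keyid=0 extra args: --dhchap-ctrlr-key ckey0
# keyid=3 extra args: <none, unidirectional>

Because the expansion yields zero words when the key is absent, the same rpc_cmd nvmf_subsystem_add_host invocation works unchanged for both the unidirectional and bidirectional cases, as the add_host records with and without --dhchap-ctrlr-key throughout this log show.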
00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.203 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.204 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.770 00:19:33.770 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.770 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.770 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.028 { 00:19:34.028 "cntlid": 81, 00:19:34.028 "qid": 0, 00:19:34.028 "state": "enabled", 00:19:34.028 "thread": "nvmf_tgt_poll_group_000", 00:19:34.028 "listen_address": { 00:19:34.028 "trtype": "TCP", 00:19:34.028 "adrfam": "IPv4", 00:19:34.028 "traddr": "10.0.0.2", 00:19:34.028 "trsvcid": "4420" 00:19:34.028 }, 00:19:34.028 "peer_address": { 00:19:34.028 "trtype": "TCP", 00:19:34.028 "adrfam": "IPv4", 00:19:34.028 "traddr": "10.0.0.1", 00:19:34.028 "trsvcid": "45172" 00:19:34.028 }, 00:19:34.028 "auth": { 00:19:34.028 "state": "completed", 00:19:34.028 "digest": "sha384", 00:19:34.028 "dhgroup": "ffdhe6144" 00:19:34.028 } 00:19:34.028 } 00:19:34.028 ]' 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.028 02:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.028 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.287 02:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:19:35.221 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.221 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.221 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.221 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.480 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.480 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.480 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.480 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.738 02:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.738 02:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.303 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.303 { 00:19:36.303 "cntlid": 83, 00:19:36.303 "qid": 0, 00:19:36.303 "state": "enabled", 00:19:36.303 "thread": "nvmf_tgt_poll_group_000", 00:19:36.303 "listen_address": { 00:19:36.303 "trtype": "TCP", 00:19:36.303 "adrfam": "IPv4", 00:19:36.303 "traddr": "10.0.0.2", 00:19:36.303 "trsvcid": "4420" 00:19:36.303 }, 00:19:36.303 "peer_address": { 00:19:36.303 "trtype": "TCP", 00:19:36.303 "adrfam": "IPv4", 00:19:36.303 "traddr": "10.0.0.1", 00:19:36.303 "trsvcid": "45196" 00:19:36.303 }, 00:19:36.303 "auth": { 00:19:36.303 "state": "completed", 00:19:36.303 "digest": "sha384", 00:19:36.303 "dhgroup": "ffdhe6144" 00:19:36.303 } 00:19:36.303 } 00:19:36.303 ]' 00:19:36.303 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.560 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.560 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.560 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.560 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.560 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.560 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.560 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.817 02:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.751 02:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.009 02:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.009 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.574 00:19:38.574 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.574 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.574 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.832 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.832 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.832 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.832 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.832 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.832 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.832 { 00:19:38.832 "cntlid": 85, 00:19:38.833 "qid": 0, 00:19:38.833 "state": "enabled", 00:19:38.833 "thread": "nvmf_tgt_poll_group_000", 00:19:38.833 "listen_address": { 00:19:38.833 "trtype": "TCP", 00:19:38.833 "adrfam": "IPv4", 00:19:38.833 "traddr": "10.0.0.2", 00:19:38.833 "trsvcid": "4420" 00:19:38.833 }, 00:19:38.833 "peer_address": { 00:19:38.833 "trtype": "TCP", 00:19:38.833 "adrfam": "IPv4", 00:19:38.833 "traddr": "10.0.0.1", 00:19:38.833 "trsvcid": "45226" 00:19:38.833 }, 00:19:38.833 "auth": { 00:19:38.833 "state": "completed", 00:19:38.833 "digest": "sha384", 00:19:38.833 "dhgroup": "ffdhe6144" 00:19:38.833 } 00:19:38.833 } 00:19:38.833 ]' 00:19:38.833 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.833 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.833 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.833 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.833 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.091 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.091 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.091 02:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.091 02:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.024 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.282 02:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.283 02:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.849 00:19:41.107 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.107 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.107 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.365 { 00:19:41.365 "cntlid": 87, 00:19:41.365 "qid": 0, 00:19:41.365 "state": "enabled", 00:19:41.365 "thread": "nvmf_tgt_poll_group_000", 00:19:41.365 "listen_address": { 00:19:41.365 "trtype": "TCP", 00:19:41.365 "adrfam": "IPv4", 00:19:41.365 "traddr": "10.0.0.2", 00:19:41.365 "trsvcid": "4420" 00:19:41.365 }, 00:19:41.365 "peer_address": { 00:19:41.365 "trtype": "TCP", 00:19:41.365 "adrfam": "IPv4", 00:19:41.365 "traddr": "10.0.0.1", 00:19:41.365 "trsvcid": "45246" 00:19:41.365 }, 00:19:41.365 "auth": { 00:19:41.365 "state": "completed", 00:19:41.365 "digest": "sha384", 00:19:41.365 "dhgroup": "ffdhe6144" 00:19:41.365 } 00:19:41.365 } 00:19:41.365 ]' 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.365 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.623 02:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.557 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.815 02:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.748 00:19:43.748 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.748 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.748 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.005 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.005 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.005 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.005 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.005 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.005 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.005 { 00:19:44.005 "cntlid": 89, 00:19:44.005 "qid": 0, 00:19:44.005 "state": "enabled", 00:19:44.005 "thread": "nvmf_tgt_poll_group_000", 00:19:44.005 "listen_address": { 00:19:44.005 "trtype": "TCP", 00:19:44.005 "adrfam": "IPv4", 00:19:44.005 "traddr": "10.0.0.2", 00:19:44.005 "trsvcid": "4420" 00:19:44.005 }, 00:19:44.005 "peer_address": { 00:19:44.005 "trtype": "TCP", 00:19:44.005 "adrfam": "IPv4", 00:19:44.005 "traddr": "10.0.0.1", 00:19:44.005 "trsvcid": "59496" 00:19:44.005 }, 00:19:44.005 "auth": { 00:19:44.005 "state": "completed", 00:19:44.005 "digest": "sha384", 00:19:44.005 "dhgroup": "ffdhe8192" 00:19:44.005 } 00:19:44.005 } 00:19:44.005 ]' 00:19:44.005 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.263 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.263 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.263 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.263 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.263 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.263 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.263 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.520 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:19:45.451 02:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.451 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.451 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.451 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.451 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.451 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.451 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:45.451 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.021 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.620 00:19:46.620 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.620 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.621 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.878 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.878 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.878 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.878 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.878 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.878 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.878 { 00:19:46.878 "cntlid": 91, 00:19:46.878 "qid": 0, 00:19:46.878 "state": "enabled", 00:19:46.878 "thread": "nvmf_tgt_poll_group_000", 00:19:46.878 "listen_address": { 00:19:46.878 "trtype": "TCP", 00:19:46.878 "adrfam": "IPv4", 00:19:46.878 "traddr": "10.0.0.2", 00:19:46.878 "trsvcid": "4420" 00:19:46.878 }, 00:19:46.878 "peer_address": { 00:19:46.878 "trtype": "TCP", 00:19:46.878 "adrfam": "IPv4", 00:19:46.878 "traddr": "10.0.0.1", 00:19:46.878 "trsvcid": "59530" 00:19:46.878 }, 00:19:46.878 "auth": { 00:19:46.878 "state": "completed", 00:19:46.878 "digest": "sha384", 00:19:46.878 "dhgroup": "ffdhe8192" 00:19:46.878 } 00:19:46.878 } 00:19:46.878 ]' 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.135 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.393 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.325 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.581 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.513 00:19:49.513 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.513 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.513 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.771 { 00:19:49.771 "cntlid": 93, 00:19:49.771 "qid": 0, 00:19:49.771 "state": "enabled", 00:19:49.771 "thread": "nvmf_tgt_poll_group_000", 00:19:49.771 "listen_address": { 00:19:49.771 "trtype": "TCP", 00:19:49.771 "adrfam": "IPv4", 00:19:49.771 "traddr": "10.0.0.2", 00:19:49.771 "trsvcid": "4420" 00:19:49.771 }, 00:19:49.771 "peer_address": { 00:19:49.771 "trtype": "TCP", 00:19:49.771 "adrfam": "IPv4", 00:19:49.771 "traddr": "10.0.0.1", 00:19:49.771 "trsvcid": "59562" 00:19:49.771 }, 00:19:49.771 "auth": { 00:19:49.771 "state": "completed", 00:19:49.771 "digest": "sha384", 00:19:49.771 "dhgroup": "ffdhe8192" 00:19:49.771 } 00:19:49.771 } 00:19:49.771 ]' 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.771 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.029 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.029 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.029 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.287 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:19:51.220 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.220 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.220 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.220 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.220 02:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.220 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.220 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:51.220 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.477 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.409 00:19:52.409 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.409 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.409 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.667 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.667 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.667 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.667 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:52.667 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.667 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.667 { 00:19:52.667 "cntlid": 95, 00:19:52.667 "qid": 0, 00:19:52.667 "state": "enabled", 00:19:52.668 "thread": "nvmf_tgt_poll_group_000", 00:19:52.668 "listen_address": { 00:19:52.668 "trtype": "TCP", 00:19:52.668 "adrfam": "IPv4", 00:19:52.668 "traddr": "10.0.0.2", 00:19:52.668 "trsvcid": "4420" 00:19:52.668 }, 00:19:52.668 "peer_address": { 00:19:52.668 "trtype": "TCP", 00:19:52.668 "adrfam": "IPv4", 00:19:52.668 "traddr": "10.0.0.1", 00:19:52.668 "trsvcid": "33746" 00:19:52.668 }, 00:19:52.668 "auth": { 00:19:52.668 "state": "completed", 00:19:52.668 "digest": "sha384", 00:19:52.668 "dhgroup": "ffdhe8192" 00:19:52.668 } 00:19:52.668 } 00:19:52.668 ]' 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.668 02:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.926 02:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:19:53.859 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.859 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.859 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.859 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.118 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.118 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:54.118 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.118 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.118 02:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.118 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.375 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.376 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.634 00:19:54.634 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.634 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.634 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.892 02:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.892 { 00:19:54.892 "cntlid": 97, 00:19:54.892 "qid": 0, 00:19:54.892 "state": "enabled", 00:19:54.892 "thread": "nvmf_tgt_poll_group_000", 00:19:54.892 "listen_address": { 00:19:54.892 "trtype": "TCP", 00:19:54.892 "adrfam": "IPv4", 00:19:54.892 "traddr": "10.0.0.2", 00:19:54.892 "trsvcid": "4420" 00:19:54.892 }, 00:19:54.892 "peer_address": { 00:19:54.892 "trtype": "TCP", 00:19:54.892 "adrfam": "IPv4", 00:19:54.892 "traddr": "10.0.0.1", 00:19:54.892 "trsvcid": "33782" 00:19:54.892 }, 00:19:54.892 "auth": { 00:19:54.892 "state": "completed", 00:19:54.892 "digest": "sha512", 00:19:54.892 "dhgroup": "null" 00:19:54.892 } 00:19:54.892 } 00:19:54.892 ]' 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.892 02:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.150 02:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.080 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.337 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:56.337 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.337 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.337 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.337 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.338 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.338 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.338 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.338 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.338 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.338 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.338 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.902 00:19:56.902 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.902 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.902 02:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.902 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.902 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.902 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.902 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.160 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.161 { 00:19:57.161 "cntlid": 99, 00:19:57.161 "qid": 0, 00:19:57.161 "state": "enabled", 00:19:57.161 "thread": "nvmf_tgt_poll_group_000", 00:19:57.161 "listen_address": { 00:19:57.161 "trtype": "TCP", 00:19:57.161 "adrfam": "IPv4", 00:19:57.161 
"traddr": "10.0.0.2", 00:19:57.161 "trsvcid": "4420" 00:19:57.161 }, 00:19:57.161 "peer_address": { 00:19:57.161 "trtype": "TCP", 00:19:57.161 "adrfam": "IPv4", 00:19:57.161 "traddr": "10.0.0.1", 00:19:57.161 "trsvcid": "33806" 00:19:57.161 }, 00:19:57.161 "auth": { 00:19:57.161 "state": "completed", 00:19:57.161 "digest": "sha512", 00:19:57.161 "dhgroup": "null" 00:19:57.161 } 00:19:57.161 } 00:19:57.161 ]' 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.161 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.419 02:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:58.351 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.608 02:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.608 02:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.865 00:19:59.122 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.122 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.122 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.122 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.122 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.122 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.122 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.381 { 00:19:59.381 "cntlid": 101, 00:19:59.381 "qid": 0, 00:19:59.381 "state": "enabled", 00:19:59.381 "thread": "nvmf_tgt_poll_group_000", 00:19:59.381 "listen_address": { 00:19:59.381 "trtype": "TCP", 00:19:59.381 "adrfam": "IPv4", 00:19:59.381 "traddr": "10.0.0.2", 00:19:59.381 "trsvcid": "4420" 00:19:59.381 }, 00:19:59.381 "peer_address": { 00:19:59.381 "trtype": "TCP", 00:19:59.381 "adrfam": "IPv4", 00:19:59.381 "traddr": "10.0.0.1", 00:19:59.381 "trsvcid": "33844" 00:19:59.381 }, 00:19:59.381 "auth": { 00:19:59.381 "state": "completed", 00:19:59.381 "digest": "sha512", 00:19:59.381 "dhgroup": "null" 
00:19:59.381 } 00:19:59.381 } 00:19:59.381 ]' 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.381 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.639 02:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:20:00.572 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.572 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.572 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.572 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.572 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.572 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.572 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:00.573 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.858 02:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.116 00:20:01.116 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.116 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.116 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.373 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.373 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.373 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.373 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.373 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.373 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.373 { 00:20:01.373 "cntlid": 103, 00:20:01.373 "qid": 0, 00:20:01.373 "state": "enabled", 00:20:01.373 "thread": "nvmf_tgt_poll_group_000", 00:20:01.373 "listen_address": { 00:20:01.373 "trtype": "TCP", 00:20:01.373 "adrfam": "IPv4", 00:20:01.373 "traddr": "10.0.0.2", 00:20:01.373 "trsvcid": "4420" 00:20:01.373 }, 00:20:01.373 "peer_address": { 00:20:01.373 "trtype": "TCP", 00:20:01.373 "adrfam": "IPv4", 00:20:01.373 "traddr": "10.0.0.1", 00:20:01.373 "trsvcid": "33874" 00:20:01.373 }, 00:20:01.373 "auth": { 00:20:01.373 "state": "completed", 00:20:01.373 "digest": "sha512", 00:20:01.373 "dhgroup": "null" 00:20:01.373 } 00:20:01.373 } 00:20:01.373 ]' 00:20:01.373 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.630 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.630 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.630 02:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:01.630 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.630 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.630 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.630 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.888 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.821 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.079 02:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.079 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.337 00:20:03.337 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.337 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.337 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.595 { 00:20:03.595 "cntlid": 105, 00:20:03.595 "qid": 0, 00:20:03.595 "state": "enabled", 00:20:03.595 "thread": "nvmf_tgt_poll_group_000", 00:20:03.595 "listen_address": { 00:20:03.595 "trtype": "TCP", 00:20:03.595 "adrfam": "IPv4", 00:20:03.595 "traddr": "10.0.0.2", 00:20:03.595 "trsvcid": "4420" 00:20:03.595 }, 00:20:03.595 "peer_address": { 00:20:03.595 "trtype": "TCP", 00:20:03.595 "adrfam": "IPv4", 00:20:03.595 "traddr": "10.0.0.1", 00:20:03.595 "trsvcid": "56590" 00:20:03.595 }, 00:20:03.595 "auth": { 00:20:03.595 "state": "completed", 00:20:03.595 "digest": "sha512", 00:20:03.595 "dhgroup": "ffdhe2048" 00:20:03.595 } 00:20:03.595 } 00:20:03.595 ]' 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.595 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.852 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.852 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.852 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.852 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.852 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.110 02:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:20:05.041 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.041 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.041 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.041 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.041 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.041 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.042 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:05.042 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.297 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.553 00:20:05.553 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.553 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.553 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.810 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.810 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.810 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.810 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.810 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.810 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.810 { 00:20:05.810 "cntlid": 107, 00:20:05.810 "qid": 0, 00:20:05.810 "state": "enabled", 00:20:05.810 "thread": "nvmf_tgt_poll_group_000", 00:20:05.810 "listen_address": { 00:20:05.810 "trtype": "TCP", 00:20:05.810 "adrfam": "IPv4", 00:20:05.810 "traddr": "10.0.0.2", 00:20:05.810 "trsvcid": "4420" 00:20:05.810 }, 00:20:05.810 "peer_address": { 00:20:05.810 "trtype": "TCP", 00:20:05.810 "adrfam": "IPv4", 00:20:05.810 "traddr": "10.0.0.1", 00:20:05.810 "trsvcid": "56626" 00:20:05.810 }, 00:20:05.810 "auth": { 00:20:05.810 "state": "completed", 00:20:05.810 "digest": "sha512", 00:20:05.810 "dhgroup": "ffdhe2048" 00:20:05.810 } 00:20:05.810 } 00:20:05.810 ]' 00:20:05.810 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.067 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.067 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.067 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.067 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.067 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.067 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.067 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.324 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.252 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.508 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.765 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.765 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
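At this point the trace is mid-round for key2/ffdhe2048. For orientation, the following is a condensed sketch of the cycle that target/auth.sh repeats for each dhgroup and key id, as the trace shows it: the rpc.py path, host RPC socket, addresses, ports and NQNs are copied from the log, while the loop bounds, the ckeys table, the use of the default target-side RPC socket for rpc_cmd-equivalent calls, the variable names, and the elided DHHC-1 secrets are assumptions for illustration only.

```bash
#!/usr/bin/env bash
set -eo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock                # host-side SPDK app socket, as in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55

# key3 is exercised without a controller key, matching the add_host calls in the trace.
ckeys=(ckey0 ckey1 ckey2 "")

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do   # assumed loop bounds
  for keyid in 0 1 2 3; do
    ckey_args=()
    [[ -n ${ckeys[$keyid]} ]] && ckey_args=(--dhchap-ctrlr-key "ckey$keyid")

    # Pin the host to one digest/dhgroup pair for this round.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

    # Target side (default RPC socket assumed): allow the host with the key under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey_args[@]}"

    # Authenticate from the SPDK host, then verify the negotiated qpair.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" "${ckey_args[@]}"
    [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    [[ $("$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
          | jq -r '.[0].auth.state') == completed ]]

    # Tear down, then repeat the handshake through the kernel initiator.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret "DHHC-1:..."   # secret elided; see trace
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done
```

The `jq -r '.[0].auth.digest'`, `.dhgroup` and `.state` reads in the trace correspond to the `[[ ... ]]` checks above: `auth.state == "completed"` on the target-side qpair is the pass condition for each round, and the digest/dhgroup fields confirm the pair that was actually negotiated.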
00:20:07.765 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.022 00:20:08.022 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.022 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.022 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.279 { 00:20:08.279 "cntlid": 109, 00:20:08.279 "qid": 0, 00:20:08.279 "state": "enabled", 00:20:08.279 "thread": "nvmf_tgt_poll_group_000", 00:20:08.279 "listen_address": { 00:20:08.279 "trtype": "TCP", 00:20:08.279 "adrfam": "IPv4", 00:20:08.279 "traddr": "10.0.0.2", 00:20:08.279 "trsvcid": "4420" 00:20:08.279 }, 00:20:08.279 "peer_address": { 00:20:08.279 "trtype": "TCP", 00:20:08.279 "adrfam": "IPv4", 00:20:08.279 "traddr": "10.0.0.1", 00:20:08.279 "trsvcid": "56644" 00:20:08.279 }, 00:20:08.279 "auth": { 00:20:08.279 "state": "completed", 00:20:08.279 "digest": "sha512", 00:20:08.279 "dhgroup": "ffdhe2048" 00:20:08.279 } 00:20:08.279 } 00:20:08.279 ]' 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.279 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.536 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:20:09.468 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.725 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.725 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.725 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.725 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.725 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.725 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.725 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.983 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.241 00:20:10.241 02:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.241 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.241 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.498 { 00:20:10.498 "cntlid": 111, 00:20:10.498 "qid": 0, 00:20:10.498 "state": "enabled", 00:20:10.498 "thread": "nvmf_tgt_poll_group_000", 00:20:10.498 "listen_address": { 00:20:10.498 "trtype": "TCP", 00:20:10.498 "adrfam": "IPv4", 00:20:10.498 "traddr": "10.0.0.2", 00:20:10.498 "trsvcid": "4420" 00:20:10.498 }, 00:20:10.498 "peer_address": { 00:20:10.498 "trtype": "TCP", 00:20:10.498 "adrfam": "IPv4", 00:20:10.498 "traddr": "10.0.0.1", 00:20:10.498 "trsvcid": "56678" 00:20:10.498 }, 00:20:10.498 "auth": { 00:20:10.498 "state": "completed", 00:20:10.498 "digest": "sha512", 00:20:10.498 "dhgroup": "ffdhe2048" 00:20:10.498 } 00:20:10.498 } 00:20:10.498 ]' 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.498 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.062 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.995 02:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.995 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.995 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.560 00:20:12.560 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.560 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.560 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.817 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.818 { 00:20:12.818 "cntlid": 113, 00:20:12.818 "qid": 0, 00:20:12.818 "state": "enabled", 00:20:12.818 "thread": "nvmf_tgt_poll_group_000", 00:20:12.818 "listen_address": { 00:20:12.818 "trtype": "TCP", 00:20:12.818 "adrfam": "IPv4", 00:20:12.818 "traddr": "10.0.0.2", 00:20:12.818 "trsvcid": "4420" 00:20:12.818 }, 00:20:12.818 "peer_address": { 00:20:12.818 "trtype": "TCP", 00:20:12.818 "adrfam": "IPv4", 00:20:12.818 "traddr": "10.0.0.1", 00:20:12.818 "trsvcid": "46298" 00:20:12.818 }, 00:20:12.818 "auth": { 00:20:12.818 "state": "completed", 00:20:12.818 "digest": "sha512", 00:20:12.818 "dhgroup": "ffdhe3072" 00:20:12.818 } 00:20:12.818 } 00:20:12.818 ]' 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.818 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.075 02:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.009 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.267 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.833 00:20:14.833 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.833 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.833 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.834 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:14.834 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.834 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.834 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.834 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.834 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.834 { 00:20:14.834 "cntlid": 115, 00:20:14.834 "qid": 0, 00:20:14.834 "state": "enabled", 00:20:14.834 "thread": "nvmf_tgt_poll_group_000", 00:20:14.834 "listen_address": { 00:20:14.834 "trtype": "TCP", 00:20:14.834 "adrfam": "IPv4", 00:20:14.834 "traddr": "10.0.0.2", 00:20:14.834 "trsvcid": "4420" 00:20:14.834 }, 00:20:14.834 "peer_address": { 00:20:14.834 "trtype": "TCP", 00:20:14.834 "adrfam": "IPv4", 00:20:14.834 "traddr": "10.0.0.1", 00:20:14.834 "trsvcid": "46336" 00:20:14.834 }, 00:20:14.834 "auth": { 00:20:14.834 "state": "completed", 00:20:14.834 "digest": "sha512", 00:20:14.834 "dhgroup": "ffdhe3072" 00:20:14.834 } 00:20:14.834 } 00:20:14.834 ]' 00:20:14.834 02:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.092 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.092 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.092 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.092 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.092 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.092 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.092 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.348 02:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:20:16.332 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.332 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.332 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.332 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.332 02:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.332 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.332 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.332 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.590 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.848 00:20:16.848 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.848 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.848 02:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.106 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.106 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.106 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.106 02:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.106 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.106 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.106 { 00:20:17.106 "cntlid": 117, 00:20:17.106 "qid": 0, 00:20:17.106 "state": "enabled", 00:20:17.106 "thread": "nvmf_tgt_poll_group_000", 00:20:17.106 "listen_address": { 00:20:17.106 "trtype": "TCP", 00:20:17.106 "adrfam": "IPv4", 00:20:17.106 "traddr": "10.0.0.2", 00:20:17.106 "trsvcid": "4420" 00:20:17.106 }, 00:20:17.106 "peer_address": { 00:20:17.106 "trtype": "TCP", 00:20:17.106 "adrfam": "IPv4", 00:20:17.106 "traddr": "10.0.0.1", 00:20:17.106 "trsvcid": "46370" 00:20:17.106 }, 00:20:17.106 "auth": { 00:20:17.106 "state": "completed", 00:20:17.106 "digest": "sha512", 00:20:17.106 "dhgroup": "ffdhe3072" 00:20:17.106 } 00:20:17.106 } 00:20:17.106 ]' 00:20:17.106 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.363 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.364 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.364 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.364 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.364 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.364 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.364 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.622 02:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:18.554 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.811 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.812 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.812 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.812 02:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.069 00:20:19.069 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.069 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.069 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.326 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.326 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.326 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.326 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.326 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.326 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.326 { 00:20:19.326 "cntlid": 119, 00:20:19.326 "qid": 0, 00:20:19.326 "state": "enabled", 00:20:19.326 "thread": 
"nvmf_tgt_poll_group_000", 00:20:19.326 "listen_address": { 00:20:19.326 "trtype": "TCP", 00:20:19.326 "adrfam": "IPv4", 00:20:19.326 "traddr": "10.0.0.2", 00:20:19.326 "trsvcid": "4420" 00:20:19.326 }, 00:20:19.326 "peer_address": { 00:20:19.326 "trtype": "TCP", 00:20:19.326 "adrfam": "IPv4", 00:20:19.326 "traddr": "10.0.0.1", 00:20:19.326 "trsvcid": "46394" 00:20:19.326 }, 00:20:19.326 "auth": { 00:20:19.327 "state": "completed", 00:20:19.327 "digest": "sha512", 00:20:19.327 "dhgroup": "ffdhe3072" 00:20:19.327 } 00:20:19.327 } 00:20:19.327 ]' 00:20:19.327 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.584 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.584 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.584 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.584 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.584 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.584 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.584 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.842 02:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.775 02:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.033 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:21.033 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.033 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.033 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.034 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.599 00:20:21.599 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.599 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.599 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.856 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.856 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.856 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.856 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.857 { 00:20:21.857 "cntlid": 121, 00:20:21.857 "qid": 0, 00:20:21.857 "state": "enabled", 00:20:21.857 "thread": "nvmf_tgt_poll_group_000", 00:20:21.857 "listen_address": { 00:20:21.857 "trtype": "TCP", 00:20:21.857 "adrfam": "IPv4", 00:20:21.857 "traddr": "10.0.0.2", 00:20:21.857 "trsvcid": "4420" 00:20:21.857 }, 00:20:21.857 "peer_address": { 00:20:21.857 "trtype": "TCP", 00:20:21.857 "adrfam": 
"IPv4", 00:20:21.857 "traddr": "10.0.0.1", 00:20:21.857 "trsvcid": "52538" 00:20:21.857 }, 00:20:21.857 "auth": { 00:20:21.857 "state": "completed", 00:20:21.857 "digest": "sha512", 00:20:21.857 "dhgroup": "ffdhe4096" 00:20:21.857 } 00:20:21.857 } 00:20:21.857 ]' 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.857 02:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.115 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.048 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.305 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:23.305 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.305 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.306 
02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:23.306 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.306 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.306 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.306 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.306 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.564 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.564 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.564 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.822 00:20:23.822 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.822 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.822 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.080 { 00:20:24.080 "cntlid": 123, 00:20:24.080 "qid": 0, 00:20:24.080 "state": "enabled", 00:20:24.080 "thread": "nvmf_tgt_poll_group_000", 00:20:24.080 "listen_address": { 00:20:24.080 "trtype": "TCP", 00:20:24.080 "adrfam": "IPv4", 00:20:24.080 "traddr": "10.0.0.2", 00:20:24.080 "trsvcid": "4420" 00:20:24.080 }, 00:20:24.080 "peer_address": { 00:20:24.080 "trtype": "TCP", 00:20:24.080 "adrfam": "IPv4", 00:20:24.080 "traddr": "10.0.0.1", 00:20:24.080 "trsvcid": "52560" 00:20:24.080 }, 00:20:24.080 "auth": { 00:20:24.080 "state": "completed", 00:20:24.080 "digest": "sha512", 00:20:24.080 "dhgroup": "ffdhe4096" 00:20:24.080 } 00:20:24.080 } 00:20:24.080 ]' 00:20:24.080 02:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.080 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.338 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.338 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.338 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.338 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.338 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.596 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:20:25.530 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.530 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.530 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.530 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.530 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.530 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.530 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.531 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.788 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.353 00:20:26.353 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.353 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.353 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.353 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.354 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.354 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.354 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.612 { 00:20:26.612 "cntlid": 125, 00:20:26.612 "qid": 0, 00:20:26.612 "state": "enabled", 00:20:26.612 "thread": "nvmf_tgt_poll_group_000", 00:20:26.612 "listen_address": { 00:20:26.612 "trtype": "TCP", 00:20:26.612 "adrfam": "IPv4", 00:20:26.612 "traddr": "10.0.0.2", 00:20:26.612 "trsvcid": "4420" 00:20:26.612 }, 00:20:26.612 "peer_address": { 00:20:26.612 "trtype": "TCP", 00:20:26.612 "adrfam": "IPv4", 00:20:26.612 "traddr": "10.0.0.1", 00:20:26.612 "trsvcid": "52582" 00:20:26.612 }, 00:20:26.612 "auth": { 00:20:26.612 "state": "completed", 00:20:26.612 "digest": "sha512", 00:20:26.612 "dhgroup": "ffdhe4096" 00:20:26.612 } 00:20:26.612 } 00:20:26.612 ]' 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.612 
02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.612 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.870 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:20:27.802 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.802 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.802 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.802 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.060 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.060 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.060 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:28.060 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:28.060 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.316 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.316 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.316 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.573 00:20:28.573 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.573 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.573 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.829 { 00:20:28.829 "cntlid": 127, 00:20:28.829 "qid": 0, 00:20:28.829 "state": "enabled", 00:20:28.829 "thread": "nvmf_tgt_poll_group_000", 00:20:28.829 "listen_address": { 00:20:28.829 "trtype": "TCP", 00:20:28.829 "adrfam": "IPv4", 00:20:28.829 "traddr": "10.0.0.2", 00:20:28.829 "trsvcid": "4420" 00:20:28.829 }, 00:20:28.829 "peer_address": { 00:20:28.829 "trtype": "TCP", 00:20:28.829 "adrfam": "IPv4", 00:20:28.829 "traddr": "10.0.0.1", 00:20:28.829 "trsvcid": "52602" 00:20:28.829 }, 00:20:28.829 "auth": { 00:20:28.829 "state": "completed", 00:20:28.829 "digest": "sha512", 00:20:28.829 "dhgroup": "ffdhe4096" 00:20:28.829 } 00:20:28.829 } 00:20:28.829 ]' 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.829 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.087 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.087 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.087 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.344 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:20:30.276 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:30.277 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.534 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.124 00:20:31.124 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.124 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.124 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.382 { 00:20:31.382 "cntlid": 129, 00:20:31.382 "qid": 0, 00:20:31.382 "state": "enabled", 00:20:31.382 "thread": "nvmf_tgt_poll_group_000", 00:20:31.382 "listen_address": { 00:20:31.382 "trtype": "TCP", 00:20:31.382 "adrfam": "IPv4", 00:20:31.382 "traddr": "10.0.0.2", 00:20:31.382 "trsvcid": "4420" 00:20:31.382 }, 00:20:31.382 "peer_address": { 00:20:31.382 "trtype": "TCP", 00:20:31.382 "adrfam": "IPv4", 00:20:31.382 "traddr": "10.0.0.1", 00:20:31.382 "trsvcid": "52626" 00:20:31.382 }, 00:20:31.382 "auth": { 00:20:31.382 "state": "completed", 00:20:31.382 "digest": "sha512", 00:20:31.382 "dhgroup": "ffdhe6144" 00:20:31.382 } 00:20:31.382 } 00:20:31.382 ]' 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.382 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.640 
02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:32.571 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.828 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.392 00:20:33.392 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.392 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.392 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.650 { 00:20:33.650 "cntlid": 131, 00:20:33.650 "qid": 0, 00:20:33.650 "state": "enabled", 00:20:33.650 "thread": "nvmf_tgt_poll_group_000", 00:20:33.650 "listen_address": { 00:20:33.650 "trtype": "TCP", 00:20:33.650 "adrfam": "IPv4", 00:20:33.650 "traddr": "10.0.0.2", 00:20:33.650 "trsvcid": "4420" 00:20:33.650 }, 00:20:33.650 "peer_address": { 00:20:33.650 "trtype": "TCP", 00:20:33.650 "adrfam": "IPv4", 00:20:33.650 "traddr": "10.0.0.1", 00:20:33.650 "trsvcid": "48098" 00:20:33.650 }, 00:20:33.650 "auth": { 00:20:33.650 "state": "completed", 00:20:33.650 "digest": "sha512", 00:20:33.650 "dhgroup": "ffdhe6144" 00:20:33.650 } 00:20:33.650 } 00:20:33.650 ]' 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.650 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.907 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.907 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.907 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.907 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.907 02:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.165 02:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:20:35.095 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.095 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.096 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.096 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.096 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.096 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.096 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:35.096 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.353 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.918 
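The sha512/ffdhe6144 rounds above repeat the same RPC sequence every connect_authenticate iteration in this run uses: pin the host-side bdev_nvme layer to the digest and dhgroup under test, authorize the host NQN on the subsystem with a DH-HMAC-CHAP key pair, then attach a controller through the host RPC socket, which drives the handshake. A minimal sketch of one iteration, using only commands that appear verbatim in this log (absolute workspace paths shortened); key2/ckey2 name keys registered earlier in auth.sh, outside this excerpt:

  # host side: restrict negotiation to the combination under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # target side: authorize the host with key pair key2/ckey2
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attaching the controller performs the DH-HMAC-CHAP handshake
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

Teardown is symmetric, as the surrounding log shows: bdev_nvme_detach_controller nvme0 on the host socket, nvme disconnect -n nqn.2024-03.io.spdk:cnode0 for the kernel-initiator path exercised via nvme connect with --dhchap-secret, and nvmf_subsystem_remove_host on the target before the next digest/dhgroup/key combination.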
00:20:35.918 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.918 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.918 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.176 { 00:20:36.176 "cntlid": 133, 00:20:36.176 "qid": 0, 00:20:36.176 "state": "enabled", 00:20:36.176 "thread": "nvmf_tgt_poll_group_000", 00:20:36.176 "listen_address": { 00:20:36.176 "trtype": "TCP", 00:20:36.176 "adrfam": "IPv4", 00:20:36.176 "traddr": "10.0.0.2", 00:20:36.176 "trsvcid": "4420" 00:20:36.176 }, 00:20:36.176 "peer_address": { 00:20:36.176 "trtype": "TCP", 00:20:36.176 "adrfam": "IPv4", 00:20:36.176 "traddr": "10.0.0.1", 00:20:36.176 "trsvcid": "48112" 00:20:36.176 }, 00:20:36.176 "auth": { 00:20:36.176 "state": "completed", 00:20:36.176 "digest": "sha512", 00:20:36.176 "dhgroup": "ffdhe6144" 00:20:36.176 } 00:20:36.176 } 00:20:36.176 ]' 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.176 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.434 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.367 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.367 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.625 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.191 00:20:38.191 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.191 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.191 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:38.449 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.449 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.449 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.449 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.449 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.449 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.449 { 00:20:38.449 "cntlid": 135, 00:20:38.449 "qid": 0, 00:20:38.449 "state": "enabled", 00:20:38.449 "thread": "nvmf_tgt_poll_group_000", 00:20:38.449 "listen_address": { 00:20:38.449 "trtype": "TCP", 00:20:38.449 "adrfam": "IPv4", 00:20:38.449 "traddr": "10.0.0.2", 00:20:38.449 "trsvcid": "4420" 00:20:38.449 }, 00:20:38.449 "peer_address": { 00:20:38.449 "trtype": "TCP", 00:20:38.449 "adrfam": "IPv4", 00:20:38.449 "traddr": "10.0.0.1", 00:20:38.449 "trsvcid": "48146" 00:20:38.449 }, 00:20:38.449 "auth": { 00:20:38.449 "state": "completed", 00:20:38.449 "digest": "sha512", 00:20:38.449 "dhgroup": "ffdhe6144" 00:20:38.449 } 00:20:38.449 } 00:20:38.449 ]' 00:20:38.449 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.707 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.707 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.707 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.708 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.708 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.708 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.708 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.965 02:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:39.899 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.158 02:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.088 00:20:41.088 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.088 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.088 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
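The qpair check that follows each attach is the assertion half of connect_authenticate: the host RPC must report the nvme0 controller, and the target's qpair listing must show a completed DH-HMAC-CHAP negotiation with exactly the digest and dhgroup that were configured. A sketch of those checks using the same jq filters the log records, capturing the qpair JSON in a file rather than the shell variable auth.sh uses:

  # host side: the controller created by bdev_nvme_attach_controller must exist
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # expected output: nvme0

  # target side: inspect the first listed qpair (qid 0, the admin queue)
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
  jq -r '.[0].auth.state'   qpairs.json  # expected: completed
  jq -r '.[0].auth.digest'  qpairs.json  # expected: sha512
  jq -r '.[0].auth.dhgroup' qpairs.json  # expected: ffdhe8192 in this round

Any mismatch fails the [[ ... == ... ]] comparisons visible in the log above and aborts the test.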
00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.345 { 00:20:41.345 "cntlid": 137, 00:20:41.345 "qid": 0, 00:20:41.345 "state": "enabled", 00:20:41.345 "thread": "nvmf_tgt_poll_group_000", 00:20:41.345 "listen_address": { 00:20:41.345 "trtype": "TCP", 00:20:41.345 "adrfam": "IPv4", 00:20:41.345 "traddr": "10.0.0.2", 00:20:41.345 "trsvcid": "4420" 00:20:41.345 }, 00:20:41.345 "peer_address": { 00:20:41.345 "trtype": "TCP", 00:20:41.345 "adrfam": "IPv4", 00:20:41.345 "traddr": "10.0.0.1", 00:20:41.345 "trsvcid": "48172" 00:20:41.345 }, 00:20:41.345 "auth": { 00:20:41.345 "state": "completed", 00:20:41.345 "digest": "sha512", 00:20:41.345 "dhgroup": "ffdhe8192" 00:20:41.345 } 00:20:41.345 } 00:20:41.345 ]' 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.345 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.602 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.602 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.602 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.860 02:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.792 02:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.050 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.984 00:20:43.984 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.984 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.984 02:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.984 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.984 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.984 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.984 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.984 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.984 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.984 { 00:20:43.984 "cntlid": 139, 00:20:43.984 "qid": 0, 00:20:43.984 "state": "enabled", 00:20:43.984 "thread": "nvmf_tgt_poll_group_000", 00:20:43.984 "listen_address": { 00:20:43.984 "trtype": "TCP", 00:20:43.984 "adrfam": "IPv4", 00:20:43.984 "traddr": "10.0.0.2", 00:20:43.984 "trsvcid": "4420" 00:20:43.984 }, 00:20:43.984 "peer_address": { 00:20:43.984 "trtype": "TCP", 00:20:43.984 "adrfam": "IPv4", 00:20:43.984 "traddr": "10.0.0.1", 00:20:43.984 "trsvcid": "53322" 00:20:43.984 }, 00:20:43.984 "auth": { 00:20:43.984 "state": "completed", 00:20:43.984 "digest": "sha512", 00:20:43.984 "dhgroup": "ffdhe8192" 00:20:43.984 } 00:20:43.984 } 00:20:43.984 ]' 00:20:43.984 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.242 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.242 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.242 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.242 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.242 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.242 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.242 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.500 02:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YWJlZWIwMjhhYTZiM2ExNjk0ODcxOGI4ZDFmZjg1M2ZU9dCm: --dhchap-ctrl-secret DHHC-1:02:MTZmYjM2YmVmYTE4ZTlkZTExYzhjZjA3YjhmMTRhNDlmMDY1MzI5ZTk5ZmMwYjdhya73YQ==: 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.433 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.723 02:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.655 00:20:46.655 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.655 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.655 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.912 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.912 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.912 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.912 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.912 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.912 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.912 { 00:20:46.912 "cntlid": 141, 00:20:46.912 "qid": 0, 00:20:46.912 "state": "enabled", 00:20:46.912 "thread": "nvmf_tgt_poll_group_000", 00:20:46.913 "listen_address": 
{ 00:20:46.913 "trtype": "TCP", 00:20:46.913 "adrfam": "IPv4", 00:20:46.913 "traddr": "10.0.0.2", 00:20:46.913 "trsvcid": "4420" 00:20:46.913 }, 00:20:46.913 "peer_address": { 00:20:46.913 "trtype": "TCP", 00:20:46.913 "adrfam": "IPv4", 00:20:46.913 "traddr": "10.0.0.1", 00:20:46.913 "trsvcid": "53340" 00:20:46.913 }, 00:20:46.913 "auth": { 00:20:46.913 "state": "completed", 00:20:46.913 "digest": "sha512", 00:20:46.913 "dhgroup": "ffdhe8192" 00:20:46.913 } 00:20:46.913 } 00:20:46.913 ]' 00:20:46.913 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.913 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.913 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.913 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.913 02:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.913 02:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.913 02:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.913 02:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.170 02:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzExNWExNTI2NTdkY2ZjNTQ5MWRiMmM1NmE0ZWMwOWI3OGViZWY5NGYwYWJhMmU3j4no0g==: --dhchap-ctrl-secret DHHC-1:01:NzVmMWE3OGM2YmUxZTMzNTFhYTA4NjY2NzZkNTE4MTRt/HoJ: 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.541 02:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.473 00:20:49.473 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.473 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.474 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.731 { 00:20:49.731 "cntlid": 143, 00:20:49.731 "qid": 0, 00:20:49.731 "state": "enabled", 00:20:49.731 "thread": "nvmf_tgt_poll_group_000", 00:20:49.731 "listen_address": { 00:20:49.731 "trtype": "TCP", 00:20:49.731 "adrfam": "IPv4", 00:20:49.731 "traddr": "10.0.0.2", 00:20:49.731 "trsvcid": "4420" 00:20:49.731 }, 00:20:49.731 "peer_address": { 00:20:49.731 "trtype": "TCP", 00:20:49.731 "adrfam": "IPv4", 00:20:49.731 "traddr": "10.0.0.1", 00:20:49.731 "trsvcid": "53364" 00:20:49.731 }, 00:20:49.731 "auth": { 00:20:49.731 "state": "completed", 00:20:49.731 "digest": "sha512", 00:20:49.731 "dhgroup": 
"ffdhe8192" 00:20:49.731 } 00:20:49.731 } 00:20:49.731 ]' 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.731 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.989 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:50.921 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.178 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.110 00:20:52.110 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.110 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.110 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.368 { 00:20:52.368 "cntlid": 145, 00:20:52.368 "qid": 0, 00:20:52.368 "state": "enabled", 00:20:52.368 "thread": "nvmf_tgt_poll_group_000", 00:20:52.368 "listen_address": { 00:20:52.368 "trtype": "TCP", 00:20:52.368 "adrfam": "IPv4", 00:20:52.368 "traddr": "10.0.0.2", 00:20:52.368 "trsvcid": "4420" 00:20:52.368 }, 00:20:52.368 "peer_address": { 00:20:52.368 "trtype": "TCP", 00:20:52.368 "adrfam": "IPv4", 00:20:52.368 "traddr": "10.0.0.1", 00:20:52.368 "trsvcid": "59108" 00:20:52.368 }, 00:20:52.368 "auth": { 00:20:52.368 
"state": "completed", 00:20:52.368 "digest": "sha512", 00:20:52.368 "dhgroup": "ffdhe8192" 00:20:52.368 } 00:20:52.368 } 00:20:52.368 ]' 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.368 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.625 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.625 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.625 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.625 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmNmODEwNTZlN2Q2YWU2ZDkwMzkzY2JmNjUyN2VhMjI3Zjk5MmZiN2M3NGNmYTY4t3rn5w==: --dhchap-ctrl-secret DHHC-1:03:MTQxZTJhZmQ0ZDE1YjllYTJkYTQyZmFiYjcwODM3MjFhNjVhYmMwMzdlMWJjNjAxNzA2MWRiODBlNmFhZDlmZbrESVM=: 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:53.997 02:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:53.997 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:54.561 request: 00:20:54.561 { 00:20:54.561 "name": "nvme0", 00:20:54.561 "trtype": "tcp", 00:20:54.561 "traddr": "10.0.0.2", 00:20:54.561 "adrfam": "ipv4", 00:20:54.561 "trsvcid": "4420", 00:20:54.561 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.561 "prchk_reftag": false, 00:20:54.561 "prchk_guard": false, 00:20:54.561 "hdgst": false, 00:20:54.561 "ddgst": false, 00:20:54.561 "dhchap_key": "key2", 00:20:54.561 "method": "bdev_nvme_attach_controller", 00:20:54.561 "req_id": 1 00:20:54.561 } 00:20:54.561 Got JSON-RPC error response 00:20:54.561 response: 00:20:54.561 { 00:20:54.561 "code": -5, 00:20:54.561 "message": "Input/output error" 00:20:54.561 } 00:20:54.561 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.562 
02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:54.562 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:55.492 request: 00:20:55.492 { 00:20:55.492 "name": "nvme0", 00:20:55.492 "trtype": "tcp", 00:20:55.492 "traddr": "10.0.0.2", 00:20:55.492 "adrfam": "ipv4", 00:20:55.492 "trsvcid": "4420", 00:20:55.492 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:55.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.492 "prchk_reftag": false, 00:20:55.492 "prchk_guard": false, 00:20:55.492 "hdgst": false, 00:20:55.492 "ddgst": false, 00:20:55.492 "dhchap_key": "key1", 00:20:55.492 "dhchap_ctrlr_key": "ckey2", 00:20:55.492 "method": "bdev_nvme_attach_controller", 00:20:55.492 "req_id": 1 00:20:55.492 } 00:20:55.492 Got JSON-RPC error response 00:20:55.492 response: 00:20:55.492 { 00:20:55.492 "code": -5, 00:20:55.492 "message": "Input/output error" 00:20:55.492 } 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:55.492 02:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.492 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.493 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.424 request: 00:20:56.424 { 00:20:56.424 "name": "nvme0", 00:20:56.424 "trtype": "tcp", 00:20:56.424 "traddr": "10.0.0.2", 00:20:56.424 "adrfam": "ipv4", 00:20:56.424 "trsvcid": "4420", 00:20:56.424 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:56.424 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.424 "prchk_reftag": false, 00:20:56.424 "prchk_guard": false, 00:20:56.424 "hdgst": false, 00:20:56.424 "ddgst": false, 00:20:56.424 "dhchap_key": "key1", 00:20:56.424 "dhchap_ctrlr_key": "ckey1", 00:20:56.424 "method": "bdev_nvme_attach_controller", 00:20:56.424 "req_id": 1 00:20:56.424 } 00:20:56.424 Got JSON-RPC error response 00:20:56.424 response: 00:20:56.424 { 00:20:56.424 "code": -5, 00:20:56.424 "message": "Input/output error" 00:20:56.424 } 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1035770 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1035770 ']' 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1035770 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035770 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035770' 00:20:56.424 killing process with pid 1035770 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1035770 00:20:56.424 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1035770 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1058277 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1058277 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1058277 ']' 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:56.682 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1058277 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1058277 ']' 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
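
Each connect_authenticate pass above (key1, key2, key3, then key0 once the full digest/dhgroup lists are restored) follows the same shape: the target registers the host NQN with a DH-HMAC-CHAP key, the host-side app attaches with the matching key, the negotiated auth parameters are read back from the qpair list, and everything is torn down again. A minimal sketch of one such pass, using only RPCs, NQNs, and addresses that appear in this trace — rpc.py here abbreviates the full scripts/rpc.py path shown above; the target app answers on the default /var/tmp/spdk.sock, the host app on /var/tmp/host.sock:

  # target side: require key1 (and controller key ckey1) for this host NQN
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: pin the negotiation parameters, then attach with the matching keys
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # read back what was negotiated ("sha512", "ffdhe8192", "completed"), then tear down
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each pass additionally repeats the handshake through the kernel initiator, passing the DHHC-1 secrets directly on the command line (nvme connect ... --dhchap-secret DHHC-1:01:... --dhchap-ctrl-secret DHHC-1:02:...), then disconnects and removes the host entry before the next key is tried.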
00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:56.940 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.198 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:57.198 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:57.198 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:57.198 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.198 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.455 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.386 00:20:58.386 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.386 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.386 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.644 { 00:20:58.644 "cntlid": 1, 00:20:58.644 "qid": 0, 00:20:58.644 "state": "enabled", 00:20:58.644 "thread": "nvmf_tgt_poll_group_000", 00:20:58.644 "listen_address": { 00:20:58.644 "trtype": "TCP", 00:20:58.644 "adrfam": "IPv4", 00:20:58.644 "traddr": "10.0.0.2", 00:20:58.644 "trsvcid": "4420" 00:20:58.644 }, 00:20:58.644 "peer_address": { 00:20:58.644 "trtype": "TCP", 00:20:58.644 "adrfam": "IPv4", 00:20:58.644 "traddr": "10.0.0.1", 00:20:58.644 "trsvcid": "59168" 00:20:58.644 }, 00:20:58.644 "auth": { 00:20:58.644 "state": "completed", 00:20:58.644 "digest": "sha512", 00:20:58.644 "dhgroup": "ffdhe8192" 00:20:58.644 } 00:20:58.644 } 00:20:58.644 ]' 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.644 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.902 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTcxNGJmMmFjNTIzOWU4Y2UzNmZjNjZiODkwMzFjNGRiZjhjYTFkNjg4MjE0NWM5YTM0YTE3M2UzMTNlYmY0MWXEBn4=: 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:59.833 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.091 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.348 request: 00:21:00.348 { 00:21:00.348 "name": "nvme0", 00:21:00.348 "trtype": "tcp", 00:21:00.348 "traddr": "10.0.0.2", 00:21:00.348 "adrfam": "ipv4", 00:21:00.348 "trsvcid": "4420", 00:21:00.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:00.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.348 "prchk_reftag": false, 00:21:00.348 "prchk_guard": false, 00:21:00.348 "hdgst": false, 00:21:00.348 "ddgst": false, 00:21:00.348 "dhchap_key": "key3", 00:21:00.348 "method": "bdev_nvme_attach_controller", 00:21:00.348 "req_id": 1 00:21:00.348 } 00:21:00.348 Got JSON-RPC error response 00:21:00.348 response: 00:21:00.348 { 00:21:00.348 "code": -5, 00:21:00.348 "message": "Input/output error" 00:21:00.348 } 00:21:00.348 02:20:28 
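
This request/error pair is the expected outcome, not a failure of the test run: auth.sh deliberately narrows the host's allowed parameters (here --dhchap-digests sha256 alone; the pass just below repeats the pattern with ffdhe2048 as the only dhgroup) so that negotiation with the target — which completed the previous key3 session with sha512/ffdhe8192 — presumably can no longer agree on parameters, and bdev_nvme_attach_controller returns -5 (Input/output error). The NOT wrapper from autotest_common.sh inverts the exit status, so the test only proceeds when the attach fails. A sketch of the same check, with NOT replaced by a hypothetical inline equivalent (the if/exit block is mine, not the script's):

  # restrict the host to sha256 only; the earlier key3 session negotiated sha512
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  # the attach is now expected to fail; if it succeeds, abort the test
  if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi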
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:00.348 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:00.348 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:00.348 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:00.348 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:00.348 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:00.348 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:00.348 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.606 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.863 request: 00:21:00.863 { 00:21:00.863 "name": "nvme0", 00:21:00.863 "trtype": "tcp", 00:21:00.863 "traddr": "10.0.0.2", 00:21:00.863 "adrfam": "ipv4", 00:21:00.863 "trsvcid": "4420", 00:21:00.863 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:00.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.863 "prchk_reftag": false, 00:21:00.863 "prchk_guard": false, 00:21:00.863 "hdgst": false, 00:21:00.863 "ddgst": false, 00:21:00.863 "dhchap_key": "key3", 00:21:00.863 
"method": "bdev_nvme_attach_controller", 00:21:00.863 "req_id": 1 00:21:00.863 } 00:21:00.863 Got JSON-RPC error response 00:21:00.863 response: 00:21:00.863 { 00:21:00.863 "code": -5, 00:21:00.863 "message": "Input/output error" 00:21:00.863 } 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:00.863 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:01.120 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.120 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.120 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.120 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.121 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.402 request: 00:21:01.402 { 00:21:01.402 "name": "nvme0", 00:21:01.402 "trtype": "tcp", 00:21:01.402 "traddr": "10.0.0.2", 00:21:01.402 "adrfam": "ipv4", 00:21:01.402 "trsvcid": "4420", 00:21:01.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:01.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.402 "prchk_reftag": false, 00:21:01.402 "prchk_guard": false, 00:21:01.402 "hdgst": false, 00:21:01.402 "ddgst": false, 00:21:01.402 "dhchap_key": "key0", 00:21:01.402 "dhchap_ctrlr_key": "key1", 00:21:01.402 "method": "bdev_nvme_attach_controller", 00:21:01.402 "req_id": 1 00:21:01.402 } 00:21:01.402 Got JSON-RPC error response 00:21:01.402 response: 00:21:01.402 { 00:21:01.402 "code": -5, 00:21:01.402 "message": "Input/output error" 00:21:01.402 } 00:21:01.402 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:01.403 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:01.403 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:01.403 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:01.403 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:01.403 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:01.670 00:21:01.670 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:01.670 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
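
The error above comes from the @188 check: the host entry was just re-registered at @187 without any DH-CHAP key, so presenting --dhchap-ctrlr-key key1 appears to demand mutual authentication that the target can no longer perform, and the attach fails with the same -5; the unidirectional --dhchap-key key0 attach at @192 then goes through. The get_controllers/name check that follows is how each pass confirms the controller actually came up. A sketch of that verification tail, using only commands from this trace (the name variable is mine for readability):

  # confirm the controller registered before declaring the pass good, then detach
  name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0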
00:21:01.670 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.928 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.928 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.928 02:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1035795 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1035795 ']' 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1035795 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035795 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035795' 00:21:02.185 killing process with pid 1035795 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1035795 00:21:02.185 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1035795 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.749 rmmod nvme_tcp 00:21:02.749 rmmod nvme_fabrics 00:21:02.749 rmmod nvme_keyring 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:02.749 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1058277 ']' 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1058277 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1058277 ']' 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1058277 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1058277 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1058277' 00:21:02.750 killing process with pid 1058277 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1058277 00:21:02.750 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1058277 00:21:03.008 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:03.008 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:03.008 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:03.008 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.008 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.008 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.009 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.009 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.907 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:04.907 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wgE /tmp/spdk.key-sha256.NNI /tmp/spdk.key-sha384.3fM /tmp/spdk.key-sha512.vq3 /tmp/spdk.key-sha512.whW /tmp/spdk.key-sha384.W8w /tmp/spdk.key-sha256.nXN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:04.907 00:21:04.907 real 3m9.269s 00:21:04.907 user 7m20.759s 00:21:04.907 sys 0m25.049s 00:21:04.907 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:04.907 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.907 ************************************ 00:21:04.907 END TEST nvmf_auth_target 00:21:04.907 ************************************ 00:21:04.907 02:20:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:04.907 02:20:33 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:04.907 02:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:04.907 02:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:04.907 02:20:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.907 ************************************ 00:21:04.907 START TEST nvmf_bdevio_no_huge 00:21:04.907 ************************************ 00:21:04.907 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:05.167 * Looking for test storage... 00:21:05.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.167 02:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.167 02:20:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.067 02:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.067 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:07.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.068 02:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:07.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:07.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
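The branch above maps each supported PCI ID (here two Intel E810 ports, 0x8086:0x159b) to its kernel net devices through sysfs before picking target and initiator interfaces. A standalone sketch of the same lookup, assuming the PCI addresses seen in this run:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # the kernel exposes net devices bound to a PCI function under its sysfs node
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done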
00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:07.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.068 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:21:07.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:21:07.068 00:21:07.068 --- 10.0.0.2 ping statistics --- 00:21:07.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.068 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:21:07.068 00:21:07.068 --- 10.0.0.1 ping statistics --- 00:21:07.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.068 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1061043 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1061043 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1061043 ']' 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
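nvmf_tcp_init above splits the two ports across network namespaces so a single machine can run the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) over real hardware, verifies both directions with ping, and then launches nvmf_tgt inside the namespace without hugepages. A condensed sketch of those steps, assuming the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

    # --no-huge -s 1024: 1 GiB of ordinary memory instead of hugepages;
    # -m 0x78 pins the reactors to cores 3-6, matching the notices below
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78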
00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.068 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.327 [2024-07-27 02:20:35.252183] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:07.327 [2024-07-27 02:20:35.252265] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:07.327 [2024-07-27 02:20:35.301112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:07.327 [2024-07-27 02:20:35.319693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.327 [2024-07-27 02:20:35.398606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.327 [2024-07-27 02:20:35.398673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.327 [2024-07-27 02:20:35.398696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.327 [2024-07-27 02:20:35.398707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.327 [2024-07-27 02:20:35.398717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.327 [2024-07-27 02:20:35.398799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:07.327 [2024-07-27 02:20:35.398863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:07.327 [2024-07-27 02:20:35.398928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:07.327 [2024-07-27 02:20:35.398930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.327 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:07.327 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:07.327 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.327 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.327 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.586 [2024-07-27 02:20:35.508559] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b 
Malloc0 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.586 Malloc0 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.586 [2024-07-27 02:20:35.546301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.586 { 00:21:07.586 "params": { 00:21:07.586 "name": "Nvme$subsystem", 00:21:07.586 "trtype": "$TEST_TRANSPORT", 00:21:07.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.586 "adrfam": "ipv4", 00:21:07.586 "trsvcid": "$NVMF_PORT", 00:21:07.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.586 "hdgst": ${hdgst:-false}, 00:21:07.586 "ddgst": ${ddgst:-false} 00:21:07.586 }, 00:21:07.586 "method": "bdev_nvme_attach_controller" 00:21:07.586 } 00:21:07.586 EOF 00:21:07.586 )") 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 
-- # cat 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:07.586 02:20:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:07.586 "params": { 00:21:07.586 "name": "Nvme1", 00:21:07.586 "trtype": "tcp", 00:21:07.586 "traddr": "10.0.0.2", 00:21:07.586 "adrfam": "ipv4", 00:21:07.586 "trsvcid": "4420", 00:21:07.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.586 "hdgst": false, 00:21:07.586 "ddgst": false 00:21:07.586 }, 00:21:07.586 "method": "bdev_nvme_attach_controller" 00:21:07.586 }' 00:21:07.586 [2024-07-27 02:20:35.590563] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:07.586 [2024-07-27 02:20:35.590642] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1061074 ] 00:21:07.586 [2024-07-27 02:20:35.632035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:07.586 [2024-07-27 02:20:35.651635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:07.586 [2024-07-27 02:20:35.734478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.586 [2024-07-27 02:20:35.734529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.586 [2024-07-27 02:20:35.734532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.844 I/O targets: 00:21:07.844 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:07.844 00:21:07.844 00:21:07.844 CUnit - A unit testing framework for C - Version 2.1-3 00:21:07.844 http://cunit.sourceforge.net/ 00:21:07.844 00:21:07.844 00:21:07.844 Suite: bdevio tests on: Nvme1n1 00:21:07.844 Test: blockdev write read block ...passed 00:21:08.102 Test: blockdev write zeroes read block ...passed 00:21:08.102 Test: blockdev write zeroes read no split ...passed 00:21:08.102 Test: blockdev write zeroes read split ...passed 00:21:08.102 Test: blockdev write zeroes read split partial ...passed 00:21:08.102 Test: blockdev reset ...[2024-07-27 02:20:36.153588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:08.102 [2024-07-27 02:20:36.153722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b2330 (9): Bad file descriptor 00:21:08.360 [2024-07-27 02:20:36.291406] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:08.360 passed 00:21:08.360 Test: blockdev write read 8 blocks ...passed 00:21:08.360 Test: blockdev write read size > 128k ...passed 00:21:08.360 Test: blockdev write read invalid size ...passed 00:21:08.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:08.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:08.360 Test: blockdev write read max offset ...passed 00:21:08.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:08.360 Test: blockdev writev readv 8 blocks ...passed 00:21:08.360 Test: blockdev writev readv 30 x 1block ...passed 00:21:08.360 Test: blockdev writev readv block ...passed 00:21:08.360 Test: blockdev writev readv size > 128k ...passed 00:21:08.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:08.360 Test: blockdev comparev and writev ...[2024-07-27 02:20:36.466486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.466522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:08.360 [2024-07-27 02:20:36.466547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.466564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:08.360 [2024-07-27 02:20:36.466975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.466999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:08.360 [2024-07-27 02:20:36.467026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.467044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:08.360 [2024-07-27 02:20:36.467436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.467462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:08.360 [2024-07-27 02:20:36.467483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.467500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:08.360 [2024-07-27 02:20:36.467928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.467955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:08.360 [2024-07-27 02:20:36.467977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:08.360 [2024-07-27 02:20:36.467993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:08.360 passed 00:21:08.618 Test: blockdev nvme passthru rw ...passed 00:21:08.618 Test: blockdev nvme passthru vendor specific ...[2024-07-27 02:20:36.551546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.618 [2024-07-27 02:20:36.551578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:08.618 [2024-07-27 02:20:36.551785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.618 [2024-07-27 02:20:36.551808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:08.618 [2024-07-27 02:20:36.552011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.618 [2024-07-27 02:20:36.552033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:08.618 [2024-07-27 02:20:36.552247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.618 [2024-07-27 02:20:36.552273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:08.618 passed 00:21:08.618 Test: blockdev nvme admin passthru ...passed 00:21:08.618 Test: blockdev copy ...passed 00:21:08.618 00:21:08.618 Run Summary: Type Total Ran Passed Failed Inactive 00:21:08.618 suites 1 1 n/a 0 0 00:21:08.618 tests 23 23 23 0 0 00:21:08.618 asserts 152 152 152 0 n/a 00:21:08.618 00:21:08.618 Elapsed time = 1.352 seconds 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.876 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.877 rmmod nvme_tcp 00:21:08.877 rmmod nvme_fabrics 00:21:08.877 rmmod nvme_keyring 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1061043 ']' 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1061043 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1061043 ']' 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1061043 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.877 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1061043 00:21:08.877 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:08.877 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:08.877 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1061043' 00:21:08.877 killing process with pid 1061043 00:21:08.877 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1061043 00:21:08.877 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1061043 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.445 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:11.351 00:21:11.351 real 0m6.363s 00:21:11.351 user 0m10.526s 00:21:11.351 sys 0m2.451s 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.351 ************************************ 00:21:11.351 END TEST nvmf_bdevio_no_huge 00:21:11.351 ************************************ 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.351 ************************************ 00:21:11.351 START TEST nvmf_tls 00:21:11.351 ************************************ 00:21:11.351 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:11.610 * Looking for test storage... 00:21:11.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.610 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
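Each suite in this log is launched through the same run_test wrapper, which prints the START/END banner pair and the bash time summary (real/user/sys) seen at every suite boundary. A hypothetical reduction of that wrapper, for orientation only; the real helper in autotest_common.sh does more, such as the argument-count checks traced above:

    # hypothetical reduction of run_test -- not the actual SPDK implementation
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                     # emits the real/user/sys lines in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp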
00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:11.611 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:13.510 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:13.510 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:13.510 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:13.510 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.510 02:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.510 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:13.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:21:13.769 00:21:13.769 --- 10.0.0.2 ping statistics --- 00:21:13.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.769 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:21:13.769 00:21:13.769 --- 10.0.0.1 ping statistics --- 00:21:13.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.769 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1063141 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1063141 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1063141 ']' 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.769 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.769 [2024-07-27 02:20:41.762858] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:21:13.769 [2024-07-27 02:20:41.762943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.769 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.769 [2024-07-27 02:20:41.803621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:13.769 [2024-07-27 02:20:41.830564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.769 [2024-07-27 02:20:41.918350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.769 [2024-07-27 02:20:41.918406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.769 [2024-07-27 02:20:41.918419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.769 [2024-07-27 02:20:41.918431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.769 [2024-07-27 02:20:41.918440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.769 [2024-07-27 02:20:41.918466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:14.026 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:14.284 true 00:21:14.284 02:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.284 02:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:14.542 02:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:14.542 02:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:14.542 02:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:14.799 02:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.799 02:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:15.056 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:15.056 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 
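The tls_version checks above are a plain JSON-RPC round-trip against the target's ssl sock implementation: set an option, read it back with sock_impl_get_options, and compare the jq-extracted value. A condensed sketch of that round-trip, using the same rpc.py calls as the trace (the failure message is illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl                # route new sockets through the ssl impl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  ver=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
  [[ $ver == 13 ]] || { echo "unexpected tls_version: $ver" >&2; exit 1; }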
00:21:15.056 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:15.314 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:15.314 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:15.571 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:15.571 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:15.571 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:15.571 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:15.829 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:15.829 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:15.829 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:16.086 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:16.086 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:16.344 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:16.344 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:16.344 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:16.602 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:16.602 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # 
format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.UX3Fa9SeDJ 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.SzWOJcS0ts 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.UX3Fa9SeDJ 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SzWOJcS0ts 00:21:16.862 02:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:17.120 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:17.692 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.UX3Fa9SeDJ 00:21:17.692 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UX3Fa9SeDJ 00:21:17.693 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.693 [2024-07-27 02:20:45.806147] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.693 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.005 02:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.263 [2024-07-27 02:20:46.307503] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.263 [2024-07-27 02:20:46.307762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.263 02:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.522 malloc0 00:21:18.522 02:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.780 02:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UX3Fa9SeDJ 00:21:19.038 [2024-07-27 02:20:47.150188] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:19.038 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UX3Fa9SeDJ 00:21:19.038 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.242 Initializing NVMe Controllers 00:21:31.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.242 Initialization complete. Launching workers. 00:21:31.242 ======================================================== 00:21:31.242 Latency(us) 00:21:31.242 Device Information : IOPS MiB/s Average min max 00:21:31.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7548.39 29.49 8480.91 1265.79 10163.71 00:21:31.242 ======================================================== 00:21:31.242 Total : 7548.39 29.49 8480.91 1265.79 10163.71 00:21:31.242 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UX3Fa9SeDJ 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UX3Fa9SeDJ' 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1065176 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1065176 /var/tmp/bdevperf.sock 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1065176 ']' 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.242 02:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.242 [2024-07-27 02:20:57.329616] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:31.242 [2024-07-27 02:20:57.329708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065176 ] 00:21:31.242 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.242 [2024-07-27 02:20:57.362659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:31.242 [2024-07-27 02:20:57.390103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.242 [2024-07-27 02:20:57.473502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UX3Fa9SeDJ 00:21:31.242 [2024-07-27 02:20:57.852538] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.242 [2024-07-27 02:20:57.852648] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.242 TLSTESTn1 00:21:31.242 02:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.242 Running I/O for 10 seconds... 
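The run above follows the bdevperf TLS pattern used for the rest of this suite: start bdevperf idle (-z) on a private RPC socket, attach a TLS-wrapped NVMe/TCP controller through that socket, then kick the workload with perform_tests. A condensed sketch under those assumptions (binary paths shortened; $key stands for the 0600-mode PSK interchange file written earlier, e.g. /tmp/tmp.UX3Fa9SeDJ):

  sock=/var/tmp/bdevperf.sock
  bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $key
  bdevperf.py -t 20 -s $sock perform_tests        # drives I/O on the TLSTESTn1 bdev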
00:21:41.207 00:21:41.207 Latency(us) 00:21:41.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.207 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:41.207 Verification LBA range: start 0x0 length 0x2000 00:21:41.207 TLSTESTn1 : 10.05 1285.86 5.02 0.00 0.00 99310.59 6068.15 103304.15 00:21:41.207 =================================================================================================================== 00:21:41.207 Total : 1285.86 5.02 0.00 0.00 99310.59 6068.15 103304.15 00:21:41.207 0 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1065176 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1065176 ']' 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1065176 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1065176 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1065176' 00:21:41.207 killing process with pid 1065176 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1065176 00:21:41.207 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.207 00:21:41.207 Latency(us) 00:21:41.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.207 =================================================================================================================== 00:21:41.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.207 [2024-07-27 02:21:08.172284] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1065176 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SzWOJcS0ts 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SzWOJcS0ts 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SzWOJcS0ts 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SzWOJcS0ts' 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1066961 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.207 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1066961 /var/tmp/bdevperf.sock 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1066961 ']' 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.208 [2024-07-27 02:21:08.437425] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:41.208 [2024-07-27 02:21:08.437516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066961 ] 00:21:41.208 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.208 [2024-07-27 02:21:08.469329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:41.208 [2024-07-27 02:21:08.496053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.208 [2024-07-27 02:21:08.577481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SzWOJcS0ts 00:21:41.208 [2024-07-27 02:21:08.923638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.208 [2024-07-27 02:21:08.923769] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:41.208 [2024-07-27 02:21:08.929279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:41.208 [2024-07-27 02:21:08.929715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace8d0 (107): Transport endpoint is not connected 00:21:41.208 [2024-07-27 02:21:08.930703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace8d0 (9): Bad file descriptor 00:21:41.208 [2024-07-27 02:21:08.931702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.208 [2024-07-27 02:21:08.931723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:41.208 [2024-07-27 02:21:08.931740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:41.208 request: 00:21:41.208 { 00:21:41.208 "name": "TLSTEST", 00:21:41.208 "trtype": "tcp", 00:21:41.208 "traddr": "10.0.0.2", 00:21:41.208 "adrfam": "ipv4", 00:21:41.208 "trsvcid": "4420", 00:21:41.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.208 "prchk_reftag": false, 00:21:41.208 "prchk_guard": false, 00:21:41.208 "hdgst": false, 00:21:41.208 "ddgst": false, 00:21:41.208 "psk": "/tmp/tmp.SzWOJcS0ts", 00:21:41.208 "method": "bdev_nvme_attach_controller", 00:21:41.208 "req_id": 1 00:21:41.208 } 00:21:41.208 Got JSON-RPC error response 00:21:41.208 response: 00:21:41.208 { 00:21:41.208 "code": -5, 00:21:41.208 "message": "Input/output error" 00:21:41.208 } 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1066961 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1066961 ']' 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1066961 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1066961 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1066961' 00:21:41.208 killing process with pid 1066961 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1066961 00:21:41.208 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.208 00:21:41.208 Latency(us) 00:21:41.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.208 =================================================================================================================== 00:21:41.208 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.208 [2024-07-27 02:21:08.984714] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.208 02:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1066961 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UX3Fa9SeDJ 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UX3Fa9SeDJ 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UX3Fa9SeDJ 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UX3Fa9SeDJ' 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1067121 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1067121 /var/tmp/bdevperf.sock 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1067121 ']' 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.208 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.208 [2024-07-27 02:21:09.231221] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:41.208 [2024-07-27 02:21:09.231305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067121 ] 00:21:41.208 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.209 [2024-07-27 02:21:09.266777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:41.209 [2024-07-27 02:21:09.295259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.467 [2024-07-27 02:21:09.384575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.467 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.467 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.467 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.UX3Fa9SeDJ 00:21:41.725 [2024-07-27 02:21:09.713033] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.725 [2024-07-27 02:21:09.713174] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:41.725 [2024-07-27 02:21:09.720685] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.725 [2024-07-27 02:21:09.720716] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.725 [2024-07-27 02:21:09.720753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:41.725 [2024-07-27 02:21:09.721051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12968d0 (107): Transport endpoint is not connected 00:21:41.725 [2024-07-27 02:21:09.722027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12968d0 (9): Bad file descriptor 00:21:41.725 [2024-07-27 02:21:09.723025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.725 [2024-07-27 02:21:09.723064] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:41.725 [2024-07-27 02:21:09.723084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:41.725 request: 00:21:41.725 { 00:21:41.725 "name": "TLSTEST", 00:21:41.725 "trtype": "tcp", 00:21:41.725 "traddr": "10.0.0.2", 00:21:41.725 "adrfam": "ipv4", 00:21:41.725 "trsvcid": "4420", 00:21:41.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.725 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:41.725 "prchk_reftag": false, 00:21:41.725 "prchk_guard": false, 00:21:41.725 "hdgst": false, 00:21:41.725 "ddgst": false, 00:21:41.725 "psk": "/tmp/tmp.UX3Fa9SeDJ", 00:21:41.725 "method": "bdev_nvme_attach_controller", 00:21:41.725 "req_id": 1 00:21:41.725 } 00:21:41.725 Got JSON-RPC error response 00:21:41.725 response: 00:21:41.725 { 00:21:41.725 "code": -5, 00:21:41.725 "message": "Input/output error" 00:21:41.725 } 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1067121 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1067121 ']' 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1067121 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067121 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067121' 00:21:41.725 killing process with pid 1067121 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1067121 00:21:41.725 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.725 00:21:41.725 Latency(us) 00:21:41.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.725 =================================================================================================================== 00:21:41.725 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.725 [2024-07-27 02:21:09.773225] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.725 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1067121 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UX3Fa9SeDJ 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UX3Fa9SeDJ 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UX3Fa9SeDJ 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UX3Fa9SeDJ' 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1067257 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1067257 /var/tmp/bdevperf.sock 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1067257 ']' 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.984 02:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.984 [2024-07-27 02:21:10.040919] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:41.984 [2024-07-27 02:21:10.041021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067257 ] 00:21:41.984 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.984 [2024-07-27 02:21:10.072857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:41.984 [2024-07-27 02:21:10.101208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.242 [2024-07-27 02:21:10.195912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.242 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.242 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:42.242 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UX3Fa9SeDJ 00:21:42.501 [2024-07-27 02:21:10.540748] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.501 [2024-07-27 02:21:10.540856] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:42.501 [2024-07-27 02:21:10.552831] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.501 [2024-07-27 02:21:10.552861] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.501 [2024-07-27 02:21:10.552899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:42.501 [2024-07-27 02:21:10.553778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe708d0 (107): Transport endpoint is not connected 00:21:42.501 [2024-07-27 02:21:10.554769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe708d0 (9): Bad file descriptor 00:21:42.501 [2024-07-27 02:21:10.555767] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:42.501 [2024-07-27 02:21:10.555787] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:42.501 [2024-07-27 02:21:10.555803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:42.501 request: 00:21:42.501 { 00:21:42.501 "name": "TLSTEST", 00:21:42.501 "trtype": "tcp", 00:21:42.501 "traddr": "10.0.0.2", 00:21:42.501 "adrfam": "ipv4", 00:21:42.501 "trsvcid": "4420", 00:21:42.501 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.501 "prchk_reftag": false, 00:21:42.501 "prchk_guard": false, 00:21:42.501 "hdgst": false, 00:21:42.501 "ddgst": false, 00:21:42.501 "psk": "/tmp/tmp.UX3Fa9SeDJ", 00:21:42.501 "method": "bdev_nvme_attach_controller", 00:21:42.501 "req_id": 1 00:21:42.501 } 00:21:42.501 Got JSON-RPC error response 00:21:42.501 response: 00:21:42.501 { 00:21:42.501 "code": -5, 00:21:42.501 "message": "Input/output error" 00:21:42.501 } 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1067257 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1067257 ']' 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1067257 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067257 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067257' 00:21:42.501 killing process with pid 1067257 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1067257 00:21:42.501 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.501 00:21:42.501 Latency(us) 00:21:42.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.501 =================================================================================================================== 00:21:42.501 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.501 [2024-07-27 02:21:10.605265] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:42.501 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1067257 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1067393 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1067393 /var/tmp/bdevperf.sock 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1067393 ']' 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.760 02:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.760 [2024-07-27 02:21:10.873840] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:42.760 [2024-07-27 02:21:10.873931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067393 ] 00:21:42.760 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.760 [2024-07-27 02:21:10.905561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
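This case (tls.sh@155) passes an empty PSK, so the psk variable stays empty and bdev_nvme_attach_controller is issued with no --psk at all against the TLS-enabled listener; the plain-text connection cannot complete the handshake and the RPC is expected to return -5, Input/output error. Stripped of the harness, the failing call reduces to roughly:

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# note: no --psk, so no TLS credentials are generated for the connection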
00:21:43.018 [2024-07-27 02:21:10.933804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.018 [2024-07-27 02:21:11.014753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.018 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.018 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:43.018 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:43.276 [2024-07-27 02:21:11.385109] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:43.276 [2024-07-27 02:21:11.386976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128ede0 (9): Bad file descriptor 00:21:43.276 [2024-07-27 02:21:11.387971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.276 [2024-07-27 02:21:11.387995] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:43.276 [2024-07-27 02:21:11.388020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:43.276 request: 00:21:43.276 { 00:21:43.276 "name": "TLSTEST", 00:21:43.276 "trtype": "tcp", 00:21:43.276 "traddr": "10.0.0.2", 00:21:43.276 "adrfam": "ipv4", 00:21:43.276 "trsvcid": "4420", 00:21:43.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.276 "prchk_reftag": false, 00:21:43.276 "prchk_guard": false, 00:21:43.276 "hdgst": false, 00:21:43.276 "ddgst": false, 00:21:43.276 "method": "bdev_nvme_attach_controller", 00:21:43.276 "req_id": 1 00:21:43.276 } 00:21:43.276 Got JSON-RPC error response 00:21:43.276 response: 00:21:43.276 { 00:21:43.276 "code": -5, 00:21:43.276 "message": "Input/output error" 00:21:43.276 } 00:21:43.276 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1067393 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1067393 ']' 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1067393 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067393 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067393' 00:21:43.277 killing process with pid 1067393 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1067393 00:21:43.277 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.277 00:21:43.277 Latency(us) 00:21:43.277 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.277 =================================================================================================================== 00:21:43.277 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.277 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1067393 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1063141 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1063141 ']' 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1063141 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1063141 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1063141' 00:21:43.535 killing process with pid 1063141 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1063141 00:21:43.535 [2024-07-27 02:21:11.656451] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:43.535 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1063141 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:43.793 02:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.p74jQDqwoH 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.p74jQDqwoH 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1067542 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1067542 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1067542 ']' 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.793 02:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.052 [2024-07-27 02:21:11.982181] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:44.052 [2024-07-27 02:21:11.982268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.052 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.052 [2024-07-27 02:21:12.020558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:44.052 [2024-07-27 02:21:12.046619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.052 [2024-07-27 02:21:12.134234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.052 [2024-07-27 02:21:12.134298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.052 [2024-07-27 02:21:12.134313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.052 [2024-07-27 02:21:12.134325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.052 [2024-07-27 02:21:12.134350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
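format_interchange_psk above renders the raw key in the NVMe TLS PSK interchange form: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02 here, selecting SHA-384; 01 would be SHA-256), and a colon-terminated base64 blob. Judging by the python helper invoked at nvmf/common.sh@705, the blob is the configured key bytes with a 4-byte CRC-32 appended; a rough standalone equivalent, assuming a little-endian CRC-32, which should reproduce the key_long printed above:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
digest = 2  # hash identifier: 1 = SHA-256, 2 = SHA-384
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF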
00:21:44.052 [2024-07-27 02:21:12.134383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.p74jQDqwoH 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p74jQDqwoH 00:21:44.310 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:44.569 [2024-07-27 02:21:12.544380] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.569 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:44.826 02:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:45.085 [2024-07-27 02:21:13.101886] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.085 [2024-07-27 02:21:13.102148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.085 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:45.342 malloc0 00:21:45.342 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:45.599 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH 00:21:45.857 [2024-07-27 02:21:13.950841] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p74jQDqwoH 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p74jQDqwoH' 00:21:45.857 02:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1067827 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1067827 /var/tmp/bdevperf.sock 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1067827 ']' 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.857 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.857 [2024-07-27 02:21:14.016447] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:45.857 [2024-07-27 02:21:14.016532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067827 ] 00:21:46.115 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.115 [2024-07-27 02:21:14.047895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
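With the target side configured in the traces above (TCP transport, subsystem cnode1, a listener with TLS enabled via -k, a malloc0 namespace, and host1 authorized with the 0600 key file), the passing flow condenses to these RPCs plus the attach that this bdevperf instance issues next:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH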
00:21:46.115 [2024-07-27 02:21:14.074522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.115 [2024-07-27 02:21:14.157557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.115 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.115 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:46.115 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH 00:21:46.681 [2024-07-27 02:21:14.540329] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.681 [2024-07-27 02:21:14.540452] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:46.681 TLSTESTn1 00:21:46.681 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:46.681 Running I/O for 10 seconds... 00:21:58.909 00:21:58.909 Latency(us) 00:21:58.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.909 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.909 Verification LBA range: start 0x0 length 0x2000 00:21:58.909 TLSTESTn1 : 10.08 1194.71 4.67 0.00 0.00 106767.73 7136.14 147577.36 00:21:58.909 =================================================================================================================== 00:21:58.909 Total : 1194.71 4.67 0.00 0.00 106767.73 7136.14 147577.36 00:21:58.909 0 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1067827 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1067827 ']' 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1067827 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067827 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067827' 00:21:58.909 killing process with pid 1067827 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1067827 00:21:58.909 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.909 00:21:58.909 Latency(us) 00:21:58.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.909 
=================================================================================================================== 00:21:58.909 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.909 [2024-07-27 02:21:24.907723] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:58.909 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1067827 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.p74jQDqwoH 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p74jQDqwoH 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p74jQDqwoH 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p74jQDqwoH 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p74jQDqwoH' 00:21:58.909 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1069064 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1069064 /var/tmp/bdevperf.sock 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1069064 ']' 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
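For reference, the TLSTESTn1 summary above is internally consistent with the bdevperf arguments: at the 4096-byte I/O size (-o 4096), 1194.71 IOPS x 4096 B / 2^20 = 4.67 MiB/s, matching the MiB/s column, and with the queue depth of 128 (-q 128), Little's law predicts a mean latency of 128 / 1194.71 = 0.1071 s, about 107,000 us, in line with the reported 106767.73 us average.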
00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.910 [2024-07-27 02:21:25.173981] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:58.910 [2024-07-27 02:21:25.174083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069064 ] 00:21:58.910 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.910 [2024-07-27 02:21:25.206393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:58.910 [2024-07-27 02:21:25.231661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.910 [2024-07-27 02:21:25.315420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH 00:21:58.910 [2024-07-27 02:21:25.659721] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.910 [2024-07-27 02:21:25.659792] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:58.910 [2024-07-27 02:21:25.659805] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.p74jQDqwoH 00:21:58.910 request: 00:21:58.910 { 00:21:58.910 "name": "TLSTEST", 00:21:58.910 "trtype": "tcp", 00:21:58.910 "traddr": "10.0.0.2", 00:21:58.910 "adrfam": "ipv4", 00:21:58.910 "trsvcid": "4420", 00:21:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.910 "prchk_reftag": false, 00:21:58.910 "prchk_guard": false, 00:21:58.910 "hdgst": false, 00:21:58.910 "ddgst": false, 00:21:58.910 "psk": "/tmp/tmp.p74jQDqwoH", 00:21:58.910 "method": "bdev_nvme_attach_controller", 00:21:58.910 "req_id": 1 00:21:58.910 } 00:21:58.910 Got JSON-RPC error response 00:21:58.910 response: 00:21:58.910 { 00:21:58.910 "code": -1, 00:21:58.910 "message": "Operation not permitted" 00:21:58.910 } 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1069064 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1069064 ']' 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1069064 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1069064 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1069064' 00:21:58.910 killing process with pid 1069064 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1069064 00:21:58.910 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.910 00:21:58.910 Latency(us) 00:21:58.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.910 =================================================================================================================== 00:21:58.910 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1069064 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1067542 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1067542 ']' 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1067542 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067542 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067542' 00:21:58.910 killing process with pid 1067542 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1067542 00:21:58.910 [2024-07-27 02:21:25.915457] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:58.910 02:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1067542 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1069169 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1069169 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1069169 ']' 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.910 [2024-07-27 02:21:26.190717] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:21:58.910 [2024-07-27 02:21:26.190806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.910 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.910 [2024-07-27 02:21:26.228672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:58.910 [2024-07-27 02:21:26.261106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.910 [2024-07-27 02:21:26.355944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.910 [2024-07-27 02:21:26.356009] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.910 [2024-07-27 02:21:26.356034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.910 [2024-07-27 02:21:26.356048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.910 [2024-07-27 02:21:26.356066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
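waitforlisten simply blocks until the freshly started target answers on its RPC socket; in spirit it amounts to a bounded poll like the following (simplified sketch, not the actual helper body):

# poll until the app responds on the UNIX-domain RPC socket
while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done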
00:21:58.910 [2024-07-27 02:21:26.356110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.910 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.p74jQDqwoH 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.p74jQDqwoH 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.p74jQDqwoH 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p74jQDqwoH 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:58.911 [2024-07-27 02:21:26.739569] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.911 02:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:58.911 02:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:59.169 [2024-07-27 02:21:27.236928] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.169 [2024-07-27 02:21:27.237215] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.169 02:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:59.427 malloc0 00:21:59.427 02:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:59.685 02:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH 00:21:59.943 [2024-07-27 02:21:28.002225] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:59.943 [2024-07-27 02:21:28.002268] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:59.943 [2024-07-27 02:21:28.002310] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:59.943 request: 00:21:59.943 { 00:21:59.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.943 "host": "nqn.2016-06.io.spdk:host1", 00:21:59.943 "psk": "/tmp/tmp.p74jQDqwoH", 00:21:59.943 "method": "nvmf_subsystem_add_host", 00:21:59.943 "req_id": 1 00:21:59.943 } 00:21:59.943 Got JSON-RPC error response 00:21:59.943 response: 00:21:59.943 { 00:21:59.943 "code": -32603, 00:21:59.943 "message": "Internal error" 00:21:59.943 } 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1069169 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1069169 ']' 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1069169 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1069169 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1069169' 00:21:59.943 killing process with pid 1069169 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1069169 00:21:59.943 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1069169 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.p74jQDqwoH 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1069463 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1069463 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1069463 ']' 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.200 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.200 [2024-07-27 02:21:28.343547] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:00.200 [2024-07-27 02:21:28.343638] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.461 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.461 [2024-07-27 02:21:28.383717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:00.461 [2024-07-27 02:21:28.410028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.461 [2024-07-27 02:21:28.497691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.461 [2024-07-27 02:21:28.497748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.461 [2024-07-27 02:21:28.497761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.461 [2024-07-27 02:21:28.497772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.461 [2024-07-27 02:21:28.497781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
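Worth noting before the retry: the same 0666 key file tripped two different checks above. On the initiator side, bdev_nvme_load_psk failed the attach with code -1 ("Operation not permitted"), while on the target side tcp_load_psk failed nvmf_subsystem_add_host with -32603 ("Internal error"). Both paths reject a PSK file readable beyond its owner, which is why tls.sh@181 restored owner-only access before this target was started; a quick sanity check (the stat line is illustrative, not part of the test):

chmod 0600 /tmp/tmp.p74jQDqwoH
stat -c '%a %n' /tmp/tmp.p74jQDqwoH    # expect: 600 /tmp/tmp.p74jQDqwoH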
00:22:00.461 [2024-07-27 02:21:28.497813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.461 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.461 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:00.461 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.461 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:00.461 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.719 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.719 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.p74jQDqwoH 00:22:00.719 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p74jQDqwoH 00:22:00.719 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:00.719 [2024-07-27 02:21:28.855625] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.719 02:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:00.975 02:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:01.232 [2024-07-27 02:21:29.336929] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.232 [2024-07-27 02:21:29.337190] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.232 02:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:01.489 malloc0 00:22:01.747 02:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.005 02:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH 00:22:02.263 [2024-07-27 02:21:30.194202] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1069746 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1069746 /var/tmp/bdevperf.sock 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1069746 ']' 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.263 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.263 [2024-07-27 02:21:30.259588] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:02.263 [2024-07-27 02:21:30.259673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069746 ] 00:22:02.263 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.263 [2024-07-27 02:21:30.292132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:02.263 [2024-07-27 02:21:30.318945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.263 [2024-07-27 02:21:30.401917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.521 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.521 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.521 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH 00:22:02.779 [2024-07-27 02:21:30.781454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.779 [2024-07-27 02:21:30.781569] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.779 TLSTESTn1 00:22:02.779 02:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:03.344 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:03.344 "subsystems": [ 00:22:03.344 { 00:22:03.344 "subsystem": "keyring", 00:22:03.344 "config": [] 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "subsystem": "iobuf", 00:22:03.344 "config": [ 00:22:03.344 { 00:22:03.344 "method": "iobuf_set_options", 00:22:03.344 "params": { 00:22:03.344 "small_pool_count": 8192, 00:22:03.344 "large_pool_count": 1024, 00:22:03.344 "small_bufsize": 8192, 00:22:03.344 "large_bufsize": 135168 00:22:03.344 } 00:22:03.344 } 00:22:03.344 ] 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "subsystem": "sock", 00:22:03.344 "config": [ 00:22:03.344 { 00:22:03.344 "method": "sock_set_default_impl", 00:22:03.344 "params": { 00:22:03.344 "impl_name": "posix" 00:22:03.344 } 00:22:03.344 }, 
00:22:03.344 { 00:22:03.344 "method": "sock_impl_set_options", 00:22:03.344 "params": { 00:22:03.344 "impl_name": "ssl", 00:22:03.344 "recv_buf_size": 4096, 00:22:03.344 "send_buf_size": 4096, 00:22:03.344 "enable_recv_pipe": true, 00:22:03.344 "enable_quickack": false, 00:22:03.344 "enable_placement_id": 0, 00:22:03.344 "enable_zerocopy_send_server": true, 00:22:03.344 "enable_zerocopy_send_client": false, 00:22:03.344 "zerocopy_threshold": 0, 00:22:03.344 "tls_version": 0, 00:22:03.344 "enable_ktls": false 00:22:03.344 } 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "method": "sock_impl_set_options", 00:22:03.344 "params": { 00:22:03.344 "impl_name": "posix", 00:22:03.344 "recv_buf_size": 2097152, 00:22:03.344 "send_buf_size": 2097152, 00:22:03.344 "enable_recv_pipe": true, 00:22:03.344 "enable_quickack": false, 00:22:03.344 "enable_placement_id": 0, 00:22:03.344 "enable_zerocopy_send_server": true, 00:22:03.344 "enable_zerocopy_send_client": false, 00:22:03.344 "zerocopy_threshold": 0, 00:22:03.344 "tls_version": 0, 00:22:03.344 "enable_ktls": false 00:22:03.344 } 00:22:03.344 } 00:22:03.344 ] 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "subsystem": "vmd", 00:22:03.344 "config": [] 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "subsystem": "accel", 00:22:03.344 "config": [ 00:22:03.344 { 00:22:03.344 "method": "accel_set_options", 00:22:03.344 "params": { 00:22:03.344 "small_cache_size": 128, 00:22:03.344 "large_cache_size": 16, 00:22:03.344 "task_count": 2048, 00:22:03.344 "sequence_count": 2048, 00:22:03.344 "buf_count": 2048 00:22:03.344 } 00:22:03.344 } 00:22:03.344 ] 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "subsystem": "bdev", 00:22:03.344 "config": [ 00:22:03.344 { 00:22:03.344 "method": "bdev_set_options", 00:22:03.344 "params": { 00:22:03.344 "bdev_io_pool_size": 65535, 00:22:03.344 "bdev_io_cache_size": 256, 00:22:03.344 "bdev_auto_examine": true, 00:22:03.344 "iobuf_small_cache_size": 128, 00:22:03.344 "iobuf_large_cache_size": 16 00:22:03.344 } 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "method": "bdev_raid_set_options", 00:22:03.344 "params": { 00:22:03.344 "process_window_size_kb": 1024, 00:22:03.344 "process_max_bandwidth_mb_sec": 0 00:22:03.344 } 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "method": "bdev_iscsi_set_options", 00:22:03.344 "params": { 00:22:03.344 "timeout_sec": 30 00:22:03.344 } 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "method": "bdev_nvme_set_options", 00:22:03.344 "params": { 00:22:03.344 "action_on_timeout": "none", 00:22:03.344 "timeout_us": 0, 00:22:03.344 "timeout_admin_us": 0, 00:22:03.344 "keep_alive_timeout_ms": 10000, 00:22:03.344 "arbitration_burst": 0, 00:22:03.344 "low_priority_weight": 0, 00:22:03.344 "medium_priority_weight": 0, 00:22:03.344 "high_priority_weight": 0, 00:22:03.344 "nvme_adminq_poll_period_us": 10000, 00:22:03.344 "nvme_ioq_poll_period_us": 0, 00:22:03.344 "io_queue_requests": 0, 00:22:03.344 "delay_cmd_submit": true, 00:22:03.344 "transport_retry_count": 4, 00:22:03.344 "bdev_retry_count": 3, 00:22:03.344 "transport_ack_timeout": 0, 00:22:03.344 "ctrlr_loss_timeout_sec": 0, 00:22:03.344 "reconnect_delay_sec": 0, 00:22:03.344 "fast_io_fail_timeout_sec": 0, 00:22:03.344 "disable_auto_failback": false, 00:22:03.344 "generate_uuids": false, 00:22:03.344 "transport_tos": 0, 00:22:03.344 "nvme_error_stat": false, 00:22:03.344 "rdma_srq_size": 0, 00:22:03.344 "io_path_stat": false, 00:22:03.344 "allow_accel_sequence": false, 00:22:03.344 "rdma_max_cq_size": 0, 00:22:03.344 "rdma_cm_event_timeout_ms": 0, 00:22:03.344 
"dhchap_digests": [ 00:22:03.344 "sha256", 00:22:03.344 "sha384", 00:22:03.344 "sha512" 00:22:03.344 ], 00:22:03.344 "dhchap_dhgroups": [ 00:22:03.344 "null", 00:22:03.344 "ffdhe2048", 00:22:03.344 "ffdhe3072", 00:22:03.344 "ffdhe4096", 00:22:03.344 "ffdhe6144", 00:22:03.344 "ffdhe8192" 00:22:03.344 ] 00:22:03.344 } 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "method": "bdev_nvme_set_hotplug", 00:22:03.344 "params": { 00:22:03.344 "period_us": 100000, 00:22:03.344 "enable": false 00:22:03.344 } 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "method": "bdev_malloc_create", 00:22:03.344 "params": { 00:22:03.344 "name": "malloc0", 00:22:03.344 "num_blocks": 8192, 00:22:03.344 "block_size": 4096, 00:22:03.344 "physical_block_size": 4096, 00:22:03.344 "uuid": "922ca646-8f69-4054-937d-3741d94f2a10", 00:22:03.344 "optimal_io_boundary": 0, 00:22:03.344 "md_size": 0, 00:22:03.344 "dif_type": 0, 00:22:03.344 "dif_is_head_of_md": false, 00:22:03.344 "dif_pi_format": 0 00:22:03.344 } 00:22:03.344 }, 00:22:03.344 { 00:22:03.344 "method": "bdev_wait_for_examine" 00:22:03.344 } 00:22:03.345 ] 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "subsystem": "nbd", 00:22:03.345 "config": [] 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "subsystem": "scheduler", 00:22:03.345 "config": [ 00:22:03.345 { 00:22:03.345 "method": "framework_set_scheduler", 00:22:03.345 "params": { 00:22:03.345 "name": "static" 00:22:03.345 } 00:22:03.345 } 00:22:03.345 ] 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "subsystem": "nvmf", 00:22:03.345 "config": [ 00:22:03.345 { 00:22:03.345 "method": "nvmf_set_config", 00:22:03.345 "params": { 00:22:03.345 "discovery_filter": "match_any", 00:22:03.345 "admin_cmd_passthru": { 00:22:03.345 "identify_ctrlr": false 00:22:03.345 } 00:22:03.345 } 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "method": "nvmf_set_max_subsystems", 00:22:03.345 "params": { 00:22:03.345 "max_subsystems": 1024 00:22:03.345 } 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "method": "nvmf_set_crdt", 00:22:03.345 "params": { 00:22:03.345 "crdt1": 0, 00:22:03.345 "crdt2": 0, 00:22:03.345 "crdt3": 0 00:22:03.345 } 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "method": "nvmf_create_transport", 00:22:03.345 "params": { 00:22:03.345 "trtype": "TCP", 00:22:03.345 "max_queue_depth": 128, 00:22:03.345 "max_io_qpairs_per_ctrlr": 127, 00:22:03.345 "in_capsule_data_size": 4096, 00:22:03.345 "max_io_size": 131072, 00:22:03.345 "io_unit_size": 131072, 00:22:03.345 "max_aq_depth": 128, 00:22:03.345 "num_shared_buffers": 511, 00:22:03.345 "buf_cache_size": 4294967295, 00:22:03.345 "dif_insert_or_strip": false, 00:22:03.345 "zcopy": false, 00:22:03.345 "c2h_success": false, 00:22:03.345 "sock_priority": 0, 00:22:03.345 "abort_timeout_sec": 1, 00:22:03.345 "ack_timeout": 0, 00:22:03.345 "data_wr_pool_size": 0 00:22:03.345 } 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "method": "nvmf_create_subsystem", 00:22:03.345 "params": { 00:22:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.345 "allow_any_host": false, 00:22:03.345 "serial_number": "SPDK00000000000001", 00:22:03.345 "model_number": "SPDK bdev Controller", 00:22:03.345 "max_namespaces": 10, 00:22:03.345 "min_cntlid": 1, 00:22:03.345 "max_cntlid": 65519, 00:22:03.345 "ana_reporting": false 00:22:03.345 } 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "method": "nvmf_subsystem_add_host", 00:22:03.345 "params": { 00:22:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.345 "host": "nqn.2016-06.io.spdk:host1", 00:22:03.345 "psk": "/tmp/tmp.p74jQDqwoH" 00:22:03.345 } 00:22:03.345 }, 00:22:03.345 { 
00:22:03.345 "method": "nvmf_subsystem_add_ns", 00:22:03.345 "params": { 00:22:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.345 "namespace": { 00:22:03.345 "nsid": 1, 00:22:03.345 "bdev_name": "malloc0", 00:22:03.345 "nguid": "922CA6468F694054937D3741D94F2A10", 00:22:03.345 "uuid": "922ca646-8f69-4054-937d-3741d94f2a10", 00:22:03.345 "no_auto_visible": false 00:22:03.345 } 00:22:03.345 } 00:22:03.345 }, 00:22:03.345 { 00:22:03.345 "method": "nvmf_subsystem_add_listener", 00:22:03.345 "params": { 00:22:03.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.345 "listen_address": { 00:22:03.345 "trtype": "TCP", 00:22:03.345 "adrfam": "IPv4", 00:22:03.345 "traddr": "10.0.0.2", 00:22:03.345 "trsvcid": "4420" 00:22:03.345 }, 00:22:03.345 "secure_channel": true 00:22:03.345 } 00:22:03.345 } 00:22:03.345 ] 00:22:03.345 } 00:22:03.345 ] 00:22:03.345 }' 00:22:03.345 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:03.603 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:03.603 "subsystems": [ 00:22:03.603 { 00:22:03.603 "subsystem": "keyring", 00:22:03.603 "config": [] 00:22:03.603 }, 00:22:03.603 { 00:22:03.603 "subsystem": "iobuf", 00:22:03.603 "config": [ 00:22:03.603 { 00:22:03.603 "method": "iobuf_set_options", 00:22:03.603 "params": { 00:22:03.603 "small_pool_count": 8192, 00:22:03.603 "large_pool_count": 1024, 00:22:03.604 "small_bufsize": 8192, 00:22:03.604 "large_bufsize": 135168 00:22:03.604 } 00:22:03.604 } 00:22:03.604 ] 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "subsystem": "sock", 00:22:03.604 "config": [ 00:22:03.604 { 00:22:03.604 "method": "sock_set_default_impl", 00:22:03.604 "params": { 00:22:03.604 "impl_name": "posix" 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "sock_impl_set_options", 00:22:03.604 "params": { 00:22:03.604 "impl_name": "ssl", 00:22:03.604 "recv_buf_size": 4096, 00:22:03.604 "send_buf_size": 4096, 00:22:03.604 "enable_recv_pipe": true, 00:22:03.604 "enable_quickack": false, 00:22:03.604 "enable_placement_id": 0, 00:22:03.604 "enable_zerocopy_send_server": true, 00:22:03.604 "enable_zerocopy_send_client": false, 00:22:03.604 "zerocopy_threshold": 0, 00:22:03.604 "tls_version": 0, 00:22:03.604 "enable_ktls": false 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "sock_impl_set_options", 00:22:03.604 "params": { 00:22:03.604 "impl_name": "posix", 00:22:03.604 "recv_buf_size": 2097152, 00:22:03.604 "send_buf_size": 2097152, 00:22:03.604 "enable_recv_pipe": true, 00:22:03.604 "enable_quickack": false, 00:22:03.604 "enable_placement_id": 0, 00:22:03.604 "enable_zerocopy_send_server": true, 00:22:03.604 "enable_zerocopy_send_client": false, 00:22:03.604 "zerocopy_threshold": 0, 00:22:03.604 "tls_version": 0, 00:22:03.604 "enable_ktls": false 00:22:03.604 } 00:22:03.604 } 00:22:03.604 ] 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "subsystem": "vmd", 00:22:03.604 "config": [] 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "subsystem": "accel", 00:22:03.604 "config": [ 00:22:03.604 { 00:22:03.604 "method": "accel_set_options", 00:22:03.604 "params": { 00:22:03.604 "small_cache_size": 128, 00:22:03.604 "large_cache_size": 16, 00:22:03.604 "task_count": 2048, 00:22:03.604 "sequence_count": 2048, 00:22:03.604 "buf_count": 2048 00:22:03.604 } 00:22:03.604 } 00:22:03.604 ] 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "subsystem": "bdev", 00:22:03.604 
"config": [ 00:22:03.604 { 00:22:03.604 "method": "bdev_set_options", 00:22:03.604 "params": { 00:22:03.604 "bdev_io_pool_size": 65535, 00:22:03.604 "bdev_io_cache_size": 256, 00:22:03.604 "bdev_auto_examine": true, 00:22:03.604 "iobuf_small_cache_size": 128, 00:22:03.604 "iobuf_large_cache_size": 16 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "bdev_raid_set_options", 00:22:03.604 "params": { 00:22:03.604 "process_window_size_kb": 1024, 00:22:03.604 "process_max_bandwidth_mb_sec": 0 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "bdev_iscsi_set_options", 00:22:03.604 "params": { 00:22:03.604 "timeout_sec": 30 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "bdev_nvme_set_options", 00:22:03.604 "params": { 00:22:03.604 "action_on_timeout": "none", 00:22:03.604 "timeout_us": 0, 00:22:03.604 "timeout_admin_us": 0, 00:22:03.604 "keep_alive_timeout_ms": 10000, 00:22:03.604 "arbitration_burst": 0, 00:22:03.604 "low_priority_weight": 0, 00:22:03.604 "medium_priority_weight": 0, 00:22:03.604 "high_priority_weight": 0, 00:22:03.604 "nvme_adminq_poll_period_us": 10000, 00:22:03.604 "nvme_ioq_poll_period_us": 0, 00:22:03.604 "io_queue_requests": 512, 00:22:03.604 "delay_cmd_submit": true, 00:22:03.604 "transport_retry_count": 4, 00:22:03.604 "bdev_retry_count": 3, 00:22:03.604 "transport_ack_timeout": 0, 00:22:03.604 "ctrlr_loss_timeout_sec": 0, 00:22:03.604 "reconnect_delay_sec": 0, 00:22:03.604 "fast_io_fail_timeout_sec": 0, 00:22:03.604 "disable_auto_failback": false, 00:22:03.604 "generate_uuids": false, 00:22:03.604 "transport_tos": 0, 00:22:03.604 "nvme_error_stat": false, 00:22:03.604 "rdma_srq_size": 0, 00:22:03.604 "io_path_stat": false, 00:22:03.604 "allow_accel_sequence": false, 00:22:03.604 "rdma_max_cq_size": 0, 00:22:03.604 "rdma_cm_event_timeout_ms": 0, 00:22:03.604 "dhchap_digests": [ 00:22:03.604 "sha256", 00:22:03.604 "sha384", 00:22:03.604 "sha512" 00:22:03.604 ], 00:22:03.604 "dhchap_dhgroups": [ 00:22:03.604 "null", 00:22:03.604 "ffdhe2048", 00:22:03.604 "ffdhe3072", 00:22:03.604 "ffdhe4096", 00:22:03.604 "ffdhe6144", 00:22:03.604 "ffdhe8192" 00:22:03.604 ] 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "bdev_nvme_attach_controller", 00:22:03.604 "params": { 00:22:03.604 "name": "TLSTEST", 00:22:03.604 "trtype": "TCP", 00:22:03.604 "adrfam": "IPv4", 00:22:03.604 "traddr": "10.0.0.2", 00:22:03.604 "trsvcid": "4420", 00:22:03.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.604 "prchk_reftag": false, 00:22:03.604 "prchk_guard": false, 00:22:03.604 "ctrlr_loss_timeout_sec": 0, 00:22:03.604 "reconnect_delay_sec": 0, 00:22:03.604 "fast_io_fail_timeout_sec": 0, 00:22:03.604 "psk": "/tmp/tmp.p74jQDqwoH", 00:22:03.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.604 "hdgst": false, 00:22:03.604 "ddgst": false 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "bdev_nvme_set_hotplug", 00:22:03.604 "params": { 00:22:03.604 "period_us": 100000, 00:22:03.604 "enable": false 00:22:03.604 } 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "method": "bdev_wait_for_examine" 00:22:03.604 } 00:22:03.604 ] 00:22:03.604 }, 00:22:03.604 { 00:22:03.604 "subsystem": "nbd", 00:22:03.604 "config": [] 00:22:03.604 } 00:22:03.604 ] 00:22:03.604 }' 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1069746 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1069746 ']' 00:22:03.604 02:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1069746 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1069746 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:03.604 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1069746' 00:22:03.604 killing process with pid 1069746 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1069746 00:22:03.605 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.605 00:22:03.605 Latency(us) 00:22:03.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.605 =================================================================================================================== 00:22:03.605 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.605 [2024-07-27 02:21:31.533308] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1069746 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1069463 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1069463 ']' 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1069463 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.605 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1069463 00:22:03.862 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.862 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.862 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1069463' 00:22:03.862 killing process with pid 1069463 00:22:03.862 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1069463 00:22:03.862 [2024-07-27 02:21:31.779450] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:03.862 02:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1069463 00:22:04.121 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:04.121 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.121 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:04.121 "subsystems": [ 00:22:04.121 { 00:22:04.121 "subsystem": 
"keyring", 00:22:04.121 "config": [] 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "subsystem": "iobuf", 00:22:04.121 "config": [ 00:22:04.121 { 00:22:04.121 "method": "iobuf_set_options", 00:22:04.121 "params": { 00:22:04.121 "small_pool_count": 8192, 00:22:04.121 "large_pool_count": 1024, 00:22:04.121 "small_bufsize": 8192, 00:22:04.121 "large_bufsize": 135168 00:22:04.121 } 00:22:04.121 } 00:22:04.121 ] 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "subsystem": "sock", 00:22:04.121 "config": [ 00:22:04.121 { 00:22:04.121 "method": "sock_set_default_impl", 00:22:04.121 "params": { 00:22:04.121 "impl_name": "posix" 00:22:04.121 } 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "method": "sock_impl_set_options", 00:22:04.121 "params": { 00:22:04.121 "impl_name": "ssl", 00:22:04.121 "recv_buf_size": 4096, 00:22:04.121 "send_buf_size": 4096, 00:22:04.121 "enable_recv_pipe": true, 00:22:04.121 "enable_quickack": false, 00:22:04.121 "enable_placement_id": 0, 00:22:04.121 "enable_zerocopy_send_server": true, 00:22:04.121 "enable_zerocopy_send_client": false, 00:22:04.121 "zerocopy_threshold": 0, 00:22:04.121 "tls_version": 0, 00:22:04.121 "enable_ktls": false 00:22:04.121 } 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "method": "sock_impl_set_options", 00:22:04.121 "params": { 00:22:04.121 "impl_name": "posix", 00:22:04.121 "recv_buf_size": 2097152, 00:22:04.121 "send_buf_size": 2097152, 00:22:04.121 "enable_recv_pipe": true, 00:22:04.121 "enable_quickack": false, 00:22:04.121 "enable_placement_id": 0, 00:22:04.121 "enable_zerocopy_send_server": true, 00:22:04.121 "enable_zerocopy_send_client": false, 00:22:04.121 "zerocopy_threshold": 0, 00:22:04.121 "tls_version": 0, 00:22:04.121 "enable_ktls": false 00:22:04.121 } 00:22:04.121 } 00:22:04.121 ] 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "subsystem": "vmd", 00:22:04.121 "config": [] 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "subsystem": "accel", 00:22:04.121 "config": [ 00:22:04.121 { 00:22:04.121 "method": "accel_set_options", 00:22:04.121 "params": { 00:22:04.121 "small_cache_size": 128, 00:22:04.121 "large_cache_size": 16, 00:22:04.121 "task_count": 2048, 00:22:04.121 "sequence_count": 2048, 00:22:04.121 "buf_count": 2048 00:22:04.121 } 00:22:04.121 } 00:22:04.121 ] 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "subsystem": "bdev", 00:22:04.121 "config": [ 00:22:04.121 { 00:22:04.121 "method": "bdev_set_options", 00:22:04.121 "params": { 00:22:04.121 "bdev_io_pool_size": 65535, 00:22:04.121 "bdev_io_cache_size": 256, 00:22:04.121 "bdev_auto_examine": true, 00:22:04.121 "iobuf_small_cache_size": 128, 00:22:04.121 "iobuf_large_cache_size": 16 00:22:04.121 } 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "method": "bdev_raid_set_options", 00:22:04.121 "params": { 00:22:04.121 "process_window_size_kb": 1024, 00:22:04.121 "process_max_bandwidth_mb_sec": 0 00:22:04.121 } 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "method": "bdev_iscsi_set_options", 00:22:04.121 "params": { 00:22:04.121 "timeout_sec": 30 00:22:04.121 } 00:22:04.121 }, 00:22:04.121 { 00:22:04.121 "method": "bdev_nvme_set_options", 00:22:04.121 "params": { 00:22:04.121 "action_on_timeout": "none", 00:22:04.121 "timeout_us": 0, 00:22:04.121 "timeout_admin_us": 0, 00:22:04.122 "keep_alive_timeout_ms": 10000, 00:22:04.122 "arbitration_burst": 0, 00:22:04.122 "low_priority_weight": 0, 00:22:04.122 "medium_priority_weight": 0, 00:22:04.122 "high_priority_weight": 0, 00:22:04.122 "nvme_adminq_poll_period_us": 10000, 00:22:04.122 "nvme_ioq_poll_period_us": 0, 00:22:04.122 "io_queue_requests": 0, 
00:22:04.122 "delay_cmd_submit": true, 00:22:04.122 "transport_retry_count": 4, 00:22:04.122 "bdev_retry_count": 3, 00:22:04.122 "transport_ack_timeout": 0, 00:22:04.122 "ctrlr_loss_timeout_sec": 0, 00:22:04.122 "reconnect_delay_sec": 0, 00:22:04.122 "fast_io_fail_timeout_sec": 0, 00:22:04.122 "disable_auto_failback": false, 00:22:04.122 "generate_uuids": false, 00:22:04.122 "transport_tos": 0, 00:22:04.122 "nvme_error_stat": false, 00:22:04.122 "rdma_srq_size": 0, 00:22:04.122 "io_path_stat": false, 00:22:04.122 "allow_accel_sequence": false, 00:22:04.122 "rdma_max_cq_size": 0, 00:22:04.122 "rdma_cm_event_timeout_ms": 0, 00:22:04.122 "dhchap_digests": [ 00:22:04.122 "sha256", 00:22:04.122 "sha384", 00:22:04.122 "sha512" 00:22:04.122 ], 00:22:04.122 "dhchap_dhgroups": [ 00:22:04.122 "null", 00:22:04.122 "ffdhe2048", 00:22:04.122 "ffdhe3072", 00:22:04.122 "ffdhe4096", 00:22:04.122 "ffdhe6144", 00:22:04.122 "ffdhe8192" 00:22:04.122 ] 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "bdev_nvme_set_hotplug", 00:22:04.122 "params": { 00:22:04.122 "period_us": 100000, 00:22:04.122 "enable": false 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "bdev_malloc_create", 00:22:04.122 "params": { 00:22:04.122 "name": "malloc0", 00:22:04.122 "num_blocks": 8192, 00:22:04.122 "block_size": 4096, 00:22:04.122 "physical_block_size": 4096, 00:22:04.122 "uuid": "922ca646-8f69-4054-937d-3741d94f2a10", 00:22:04.122 "optimal_io_boundary": 0, 00:22:04.122 "md_size": 0, 00:22:04.122 "dif_type": 0, 00:22:04.122 "dif_is_head_of_md": false, 00:22:04.122 "dif_pi_format": 0 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "bdev_wait_for_examine" 00:22:04.122 } 00:22:04.122 ] 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "subsystem": "nbd", 00:22:04.122 "config": [] 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "subsystem": "scheduler", 00:22:04.122 "config": [ 00:22:04.122 { 00:22:04.122 "method": "framework_set_scheduler", 00:22:04.122 "params": { 00:22:04.122 "name": "static" 00:22:04.122 } 00:22:04.122 } 00:22:04.122 ] 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "subsystem": "nvmf", 00:22:04.122 "config": [ 00:22:04.122 { 00:22:04.122 "method": "nvmf_set_config", 00:22:04.122 "params": { 00:22:04.122 "discovery_filter": "match_any", 00:22:04.122 "admin_cmd_passthru": { 00:22:04.122 "identify_ctrlr": false 00:22:04.122 } 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "nvmf_set_max_subsystems", 00:22:04.122 "params": { 00:22:04.122 "max_subsystems": 1024 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "nvmf_set_crdt", 00:22:04.122 "params": { 00:22:04.122 "crdt1": 0, 00:22:04.122 "crdt2": 0, 00:22:04.122 "crdt3": 0 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "nvmf_create_transport", 00:22:04.122 "params": { 00:22:04.122 "trtype": "TCP", 00:22:04.122 "max_queue_depth": 128, 00:22:04.122 "max_io_qpairs_per_ctrlr": 127, 00:22:04.122 "in_capsule_data_size": 4096, 00:22:04.122 "max_io_size": 131072, 00:22:04.122 "io_unit_size": 131072, 00:22:04.122 "max_aq_depth": 128, 00:22:04.122 "num_shared_buffers": 511, 00:22:04.122 "buf_cache_size": 4294967295, 00:22:04.122 "dif_insert_or_strip": false, 00:22:04.122 "zcopy": false, 00:22:04.122 "c2h_success": false, 00:22:04.122 "sock_priority": 0, 00:22:04.122 "abort_timeout_sec": 1, 00:22:04.122 "ack_timeout": 0, 00:22:04.122 "data_wr_pool_size": 0 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "nvmf_create_subsystem", 00:22:04.122 
"params": { 00:22:04.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.122 "allow_any_host": false, 00:22:04.122 "serial_number": "SPDK00000000000001", 00:22:04.122 "model_number": "SPDK bdev Controller", 00:22:04.122 "max_namespaces": 10, 00:22:04.122 "min_cntlid": 1, 00:22:04.122 "max_cntlid": 65519, 00:22:04.122 "ana_reporting": false 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "nvmf_subsystem_add_host", 00:22:04.122 "params": { 00:22:04.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.122 "host": "nqn.2016-06.io.spdk:host1", 00:22:04.122 "psk": "/tmp/tmp.p74jQDqwoH" 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "nvmf_subsystem_add_ns", 00:22:04.122 "params": { 00:22:04.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.122 "namespace": { 00:22:04.122 "nsid": 1, 00:22:04.122 "bdev_name": "malloc0", 00:22:04.122 "nguid": "922CA6468F694054937D3741D94F2A10", 00:22:04.122 "uuid": "922ca646-8f69-4054-937d-3741d94f2a10", 00:22:04.122 "no_auto_visible": false 00:22:04.122 } 00:22:04.122 } 00:22:04.122 }, 00:22:04.122 { 00:22:04.122 "method": "nvmf_subsystem_add_listener", 00:22:04.122 "params": { 00:22:04.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.122 "listen_address": { 00:22:04.122 "trtype": "TCP", 00:22:04.122 "adrfam": "IPv4", 00:22:04.122 "traddr": "10.0.0.2", 00:22:04.122 "trsvcid": "4420" 00:22:04.122 }, 00:22:04.122 "secure_channel": true 00:22:04.122 } 00:22:04.122 } 00:22:04.122 ] 00:22:04.122 } 00:22:04.122 ] 00:22:04.122 }' 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1070010 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1070010 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1070010 ']' 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.122 02:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.122 [2024-07-27 02:21:32.081053] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:22:04.122 [2024-07-27 02:21:32.081153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.122 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.122 [2024-07-27 02:21:32.118272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:04.122 [2024-07-27 02:21:32.150583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.122 [2024-07-27 02:21:32.238658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.123 [2024-07-27 02:21:32.238711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.123 [2024-07-27 02:21:32.238736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.123 [2024-07-27 02:21:32.238750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.123 [2024-07-27 02:21:32.238763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.123 [2024-07-27 02:21:32.238854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.381 [2024-07-27 02:21:32.476473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.381 [2024-07-27 02:21:32.500888] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:04.381 [2024-07-27 02:21:32.516967] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.381 [2024-07-27 02:21:32.517272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1070075 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1070075 /var/tmp/bdevperf.sock 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1070075 ']' 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.947 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@204 -- # echo '{ 00:22:04.947 "subsystems": [ 00:22:04.947 { 00:22:04.947 "subsystem": "keyring", 00:22:04.947 "config": [] 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "subsystem": "iobuf", 00:22:04.947 "config": [ 00:22:04.947 { 00:22:04.947 "method": "iobuf_set_options", 00:22:04.947 "params": { 00:22:04.947 "small_pool_count": 8192, 00:22:04.947 "large_pool_count": 1024, 00:22:04.947 "small_bufsize": 8192, 00:22:04.947 "large_bufsize": 135168 00:22:04.947 } 00:22:04.947 } 00:22:04.947 ] 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "subsystem": "sock", 00:22:04.947 "config": [ 00:22:04.947 { 00:22:04.947 "method": "sock_set_default_impl", 00:22:04.947 "params": { 00:22:04.947 "impl_name": "posix" 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "sock_impl_set_options", 00:22:04.947 "params": { 00:22:04.947 "impl_name": "ssl", 00:22:04.947 "recv_buf_size": 4096, 00:22:04.947 "send_buf_size": 4096, 00:22:04.947 "enable_recv_pipe": true, 00:22:04.947 "enable_quickack": false, 00:22:04.947 "enable_placement_id": 0, 00:22:04.947 "enable_zerocopy_send_server": true, 00:22:04.947 "enable_zerocopy_send_client": false, 00:22:04.947 "zerocopy_threshold": 0, 00:22:04.947 "tls_version": 0, 00:22:04.947 "enable_ktls": false 00:22:04.947 } 00:22:04.947 }, 00:22:04.947 { 00:22:04.947 "method": "sock_impl_set_options", 00:22:04.947 "params": { 00:22:04.947 "impl_name": "posix", 00:22:04.947 "recv_buf_size": 2097152, 00:22:04.947 "send_buf_size": 2097152, 00:22:04.947 "enable_recv_pipe": true, 00:22:04.947 "enable_quickack": false, 00:22:04.947 "enable_placement_id": 0, 00:22:04.947 "enable_zerocopy_send_server": true, 00:22:04.947 "enable_zerocopy_send_client": false, 00:22:04.947 "zerocopy_threshold": 0, 00:22:04.947 "tls_version": 0, 00:22:04.948 "enable_ktls": false 00:22:04.948 } 00:22:04.948 } 00:22:04.948 ] 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "subsystem": "vmd", 00:22:04.948 "config": [] 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "subsystem": "accel", 00:22:04.948 "config": [ 00:22:04.948 { 00:22:04.948 "method": "accel_set_options", 00:22:04.948 "params": { 00:22:04.948 "small_cache_size": 128, 00:22:04.948 "large_cache_size": 16, 00:22:04.948 "task_count": 2048, 00:22:04.948 "sequence_count": 2048, 00:22:04.948 "buf_count": 2048 00:22:04.948 } 00:22:04.948 } 00:22:04.948 ] 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "subsystem": "bdev", 00:22:04.948 "config": [ 00:22:04.948 { 00:22:04.948 "method": "bdev_set_options", 00:22:04.948 "params": { 00:22:04.948 "bdev_io_pool_size": 65535, 00:22:04.948 "bdev_io_cache_size": 256, 00:22:04.948 "bdev_auto_examine": true, 00:22:04.948 "iobuf_small_cache_size": 128, 00:22:04.948 "iobuf_large_cache_size": 16 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_raid_set_options", 00:22:04.948 "params": { 00:22:04.948 "process_window_size_kb": 1024, 00:22:04.948 "process_max_bandwidth_mb_sec": 0 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_iscsi_set_options", 00:22:04.948 "params": { 00:22:04.948 "timeout_sec": 30 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_nvme_set_options", 00:22:04.948 "params": { 00:22:04.948 "action_on_timeout": "none", 00:22:04.948 "timeout_us": 0, 00:22:04.948 "timeout_admin_us": 0, 00:22:04.948 "keep_alive_timeout_ms": 10000, 00:22:04.948 "arbitration_burst": 0, 00:22:04.948 "low_priority_weight": 0, 00:22:04.948 "medium_priority_weight": 0, 00:22:04.948 "high_priority_weight": 0, 00:22:04.948 
"nvme_adminq_poll_period_us": 10000, 00:22:04.948 "nvme_ioq_poll_period_us": 0, 00:22:04.948 "io_queue_requests": 512, 00:22:04.948 "delay_cmd_submit": true, 00:22:04.948 "transport_retry_count": 4, 00:22:04.948 "bdev_retry_count": 3, 00:22:04.948 "transport_ack_timeout": 0, 00:22:04.948 "ctrlr_loss_timeout_sec": 0, 00:22:04.948 "reconnect_delay_sec": 0, 00:22:04.948 "fast_io_fail_timeout_sec": 0, 00:22:04.948 "disable_auto_failback": false, 00:22:04.948 "generate_uuids": false, 00:22:04.948 "transport_tos": 0, 00:22:04.948 "nvme_error_stat": false, 00:22:04.948 "rdma_srq_size": 0, 00:22:04.948 "io_path_stat": false, 00:22:04.948 "allow_accel_sequence": false, 00:22:04.948 "rdma_max_cq_size": 0, 00:22:04.948 "rdma_cm_event_timeout_ms": 0, 00:22:04.948 "dhchap_digests": [ 00:22:04.948 "sha256", 00:22:04.948 "sha384", 00:22:04.948 "sha512" 00:22:04.948 ], 00:22:04.948 "dhchap_dhgroups": [ 00:22:04.948 "null", 00:22:04.948 "ffdhe2048", 00:22:04.948 "ffdhe3072", 00:22:04.948 "ffdhe4096", 00:22:04.948 "ffdhe6144", 00:22:04.948 "ffdhe8192" 00:22:04.948 ] 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_nvme_attach_controller", 00:22:04.948 "params": { 00:22:04.948 "name": "TLSTEST", 00:22:04.948 "trtype": "TCP", 00:22:04.948 "adrfam": "IPv4", 00:22:04.948 "traddr": "10.0.0.2", 00:22:04.948 "trsvcid": "4420", 00:22:04.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.948 "prchk_reftag": false, 00:22:04.948 "prchk_guard": false, 00:22:04.948 "ctrlr_loss_timeout_sec": 0, 00:22:04.948 "reconnect_delay_sec": 0, 00:22:04.948 "fast_io_fail_timeout_sec": 0, 00:22:04.948 "psk": "/tmp/tmp.p74jQDqwoH", 00:22:04.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.948 "hdgst": false, 00:22:04.948 "ddgst": false 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_nvme_set_hotplug", 00:22:04.948 "params": { 00:22:04.948 "period_us": 100000, 00:22:04.948 "enable": false 00:22:04.948 } 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "method": "bdev_wait_for_examine" 00:22:04.948 } 00:22:04.948 ] 00:22:04.948 }, 00:22:04.948 { 00:22:04.948 "subsystem": "nbd", 00:22:04.948 "config": [] 00:22:04.948 } 00:22:04.948 ] 00:22:04.948 }' 00:22:04.948 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.948 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.948 02:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.207 [2024-07-27 02:21:33.137454] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:05.207 [2024-07-27 02:21:33.137536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070075 ] 00:22:05.207 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.207 [2024-07-27 02:21:33.175657] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:05.207 [2024-07-27 02:21:33.204529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.207 [2024-07-27 02:21:33.297774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.464 [2024-07-27 02:21:33.463069] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.464 [2024-07-27 02:21:33.463185] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:06.028 02:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.028 02:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:06.028 02:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.286 Running I/O for 10 seconds... 00:22:16.253 00:22:16.253 Latency(us) 00:22:16.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:16.253 Verification LBA range: start 0x0 length 0x2000 00:22:16.253 TLSTESTn1 : 10.09 1225.09 4.79 0.00 0.00 104109.97 6796.33 142140.30 00:22:16.253 =================================================================================================================== 00:22:16.253 Total : 1225.09 4.79 0.00 0.00 104109.97 6796.33 142140.30 00:22:16.253 0 00:22:16.253 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.253 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1070075 00:22:16.253 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1070075 ']' 00:22:16.253 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1070075 00:22:16.253 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:16.253 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.253 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070075 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1070075' 00:22:16.510 killing process with pid 1070075 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1070075 00:22:16.510 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.510 00:22:16.510 Latency(us) 00:22:16.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.510 =================================================================================================================== 00:22:16.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.510 [2024-07-27 02:21:44.422019] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@974 -- # wait 1070075 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1070010 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1070010 ']' 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1070010 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070010 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1070010' 00:22:16.510 killing process with pid 1070010 00:22:16.510 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1070010 00:22:16.511 [2024-07-27 02:21:44.667663] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:16.511 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1070010 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1071498 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1071498 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1071498 ']' 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.769 02:21:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.049 [2024-07-27 02:21:44.971525] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
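killprocess, used above to tear down both apps, follows a fixed pattern visible in the trace: confirm the pid is still alive, check the command name (refusing to kill a sudo wrapper), then kill and reap. A simplified bash equivalent of what the xtrace shows; the real helper in autotest_common.sh handles more corner cases:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        # the trace compares the comm name against "sudo" before killing
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null   # reap it if it was our child
    }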
00:22:17.049 [2024-07-27 02:21:44.971622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.049 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.049 [2024-07-27 02:21:45.008221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:17.049 [2024-07-27 02:21:45.040272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.049 [2024-07-27 02:21:45.127275] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.049 [2024-07-27 02:21:45.127355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.049 [2024-07-27 02:21:45.127373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.049 [2024-07-27 02:21:45.127387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.049 [2024-07-27 02:21:45.127399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.049 [2024-07-27 02:21:45.127429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.p74jQDqwoH 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p74jQDqwoH 00:22:17.311 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:17.569 [2024-07-27 02:21:45.510382] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.569 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:17.827 02:21:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:18.086 [2024-07-27 02:21:46.011732] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.086 [2024-07-27 02:21:46.011995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.086 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:18.344 malloc0 
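setup_nvmf_tgt, traced above, builds the TLS-capable target in four RPCs: a TCP transport, a subsystem, a listener created with -k (TLS, still flagged experimental in the notices), and a malloc bdev to export. Condensed from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    # -k requires a secure (TLS) channel on this listener
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0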
00:22:18.344 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:18.602 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH 00:22:18.602 [2024-07-27 02:21:46.745005] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1071670 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1071670 /var/tmp/bdevperf.sock 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1071670 ']' 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.860 02:21:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.860 [2024-07-27 02:21:46.802356] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:18.860 [2024-07-27 02:21:46.802420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071670 ] 00:22:18.860 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.860 [2024-07-27 02:21:46.837036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
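The remaining two setup_nvmf_tgt steps attach the namespace and authorize the host with its pre-shared key; the WARNING above is the target noting that PSK-by-path is deprecated and scheduled for removal in v24.09. Condensed:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # authorize host1 and bind it to the TLS PSK (path form, deprecated)
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p74jQDqwoH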
00:22:18.860 [2024-07-27 02:21:46.865217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.860 [2024-07-27 02:21:46.954220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.119 02:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.119 02:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.119 02:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p74jQDqwoH 00:22:19.376 02:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:19.376 [2024-07-27 02:21:47.531861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.635 nvme0n1 00:22:19.635 02:21:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:19.635 Running I/O for 1 seconds... 00:22:21.009 00:22:21.009 Latency(us) 00:22:21.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.009 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:21.009 Verification LBA range: start 0x0 length 0x2000 00:22:21.009 nvme0n1 : 1.06 1666.91 6.51 0.00 0.00 74933.33 8398.32 103304.15 00:22:21.009 =================================================================================================================== 00:22:21.009 Total : 1666.91 6.51 0.00 0.00 74933.33 8398.32 103304.15 00:22:21.009 0 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1071670 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1071670 ']' 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1071670 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071670 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071670' 00:22:21.009 killing process with pid 1071670 00:22:21.009 02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1071670 00:22:21.009 Received shutdown signal, test time was about 1.000000 seconds 00:22:21.009 00:22:21.009 Latency(us) 00:22:21.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.009 =================================================================================================================== 00:22:21.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.009 
02:21:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1071670 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1071498 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1071498 ']' 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1071498 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071498 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071498' 00:22:21.009 killing process with pid 1071498 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1071498 00:22:21.009 [2024-07-27 02:21:49.114982] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:21.009 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1071498 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1072061 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1072061 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1072061 ']' 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.269 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.269 [2024-07-27 02:21:49.413661] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
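waitforlisten, traced each time an app starts here, simply polls the app's UNIX-domain RPC socket until it answers (max_retries=100 per the trace). A simplified sketch of that loop; the actual helper in autotest_common.sh is more thorough:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for (( i = 100; i != 0; i-- )); do
            # done as soon as the app answers a trivial RPC on that socket
            ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
            sleep 0.5
        done
        return 1
    }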
00:22:21.269 [2024-07-27 02:21:49.413744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.528 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.528 [2024-07-27 02:21:49.450531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:21.528 [2024-07-27 02:21:49.482326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.528 [2024-07-27 02:21:49.571276] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.528 [2024-07-27 02:21:49.571343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.528 [2024-07-27 02:21:49.571360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.528 [2024-07-27 02:21:49.571373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.528 [2024-07-27 02:21:49.571385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.528 [2024-07-27 02:21:49.571428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.528 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.528 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:21.528 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.528 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.528 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.786 [2024-07-27 02:21:49.710457] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.786 malloc0 00:22:21.786 [2024-07-27 02:21:49.742055] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.786 [2024-07-27 02:21:49.753278] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1072090 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1072090 /var/tmp/bdevperf.sock 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1072090 ']' 00:22:21.786 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.787 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.787 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.787 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.787 02:21:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.787 [2024-07-27 02:21:49.820561] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:21.787 [2024-07-27 02:21:49.820648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072090 ] 00:22:21.787 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.787 [2024-07-27 02:21:49.852953] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:21.787 [2024-07-27 02:21:49.879842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.045 [2024-07-27 02:21:49.966798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.045 02:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.045 02:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:22.045 02:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p74jQDqwoH 00:22:22.301 02:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:22.558 [2024-07-27 02:21:50.553080] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.558 nvme0n1 00:22:22.558 02:21:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:22.815 Running I/O for 1 seconds... 
00:22:23.749 00:22:23.749 Latency(us) 00:22:23.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.749 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:23.749 Verification LBA range: start 0x0 length 0x2000 00:22:23.749 nvme0n1 : 1.07 1634.06 6.38 0.00 0.00 76251.60 6941.96 115731.72 00:22:23.749 =================================================================================================================== 00:22:23.749 Total : 1634.06 6.38 0.00 0.00 76251.60 6941.96 115731.72 00:22:23.749 0 00:22:23.749 02:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:23.749 02:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.749 02:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.007 02:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.007 02:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:24.007 "subsystems": [ 00:22:24.007 { 00:22:24.007 "subsystem": "keyring", 00:22:24.007 "config": [ 00:22:24.007 { 00:22:24.007 "method": "keyring_file_add_key", 00:22:24.007 "params": { 00:22:24.007 "name": "key0", 00:22:24.007 "path": "/tmp/tmp.p74jQDqwoH" 00:22:24.007 } 00:22:24.007 } 00:22:24.007 ] 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "subsystem": "iobuf", 00:22:24.007 "config": [ 00:22:24.007 { 00:22:24.007 "method": "iobuf_set_options", 00:22:24.007 "params": { 00:22:24.007 "small_pool_count": 8192, 00:22:24.007 "large_pool_count": 1024, 00:22:24.007 "small_bufsize": 8192, 00:22:24.007 "large_bufsize": 135168 00:22:24.007 } 00:22:24.007 } 00:22:24.007 ] 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "subsystem": "sock", 00:22:24.007 "config": [ 00:22:24.007 { 00:22:24.007 "method": "sock_set_default_impl", 00:22:24.007 "params": { 00:22:24.007 "impl_name": "posix" 00:22:24.007 } 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "method": "sock_impl_set_options", 00:22:24.007 "params": { 00:22:24.007 "impl_name": "ssl", 00:22:24.007 "recv_buf_size": 4096, 00:22:24.007 "send_buf_size": 4096, 00:22:24.007 "enable_recv_pipe": true, 00:22:24.007 "enable_quickack": false, 00:22:24.007 "enable_placement_id": 0, 00:22:24.007 "enable_zerocopy_send_server": true, 00:22:24.007 "enable_zerocopy_send_client": false, 00:22:24.007 "zerocopy_threshold": 0, 00:22:24.007 "tls_version": 0, 00:22:24.007 "enable_ktls": false 00:22:24.007 } 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "method": "sock_impl_set_options", 00:22:24.007 "params": { 00:22:24.007 "impl_name": "posix", 00:22:24.007 "recv_buf_size": 2097152, 00:22:24.007 "send_buf_size": 2097152, 00:22:24.007 "enable_recv_pipe": true, 00:22:24.007 "enable_quickack": false, 00:22:24.007 "enable_placement_id": 0, 00:22:24.007 "enable_zerocopy_send_server": true, 00:22:24.007 "enable_zerocopy_send_client": false, 00:22:24.007 "zerocopy_threshold": 0, 00:22:24.007 "tls_version": 0, 00:22:24.007 "enable_ktls": false 00:22:24.007 } 00:22:24.007 } 00:22:24.007 ] 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "subsystem": "vmd", 00:22:24.007 "config": [] 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "subsystem": "accel", 00:22:24.007 "config": [ 00:22:24.007 { 00:22:24.007 "method": "accel_set_options", 00:22:24.007 "params": { 00:22:24.007 "small_cache_size": 128, 00:22:24.007 "large_cache_size": 16, 00:22:24.007 "task_count": 2048, 00:22:24.007 "sequence_count": 2048, 00:22:24.007 "buf_count": 
2048 00:22:24.007 } 00:22:24.007 } 00:22:24.007 ] 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "subsystem": "bdev", 00:22:24.007 "config": [ 00:22:24.007 { 00:22:24.007 "method": "bdev_set_options", 00:22:24.007 "params": { 00:22:24.007 "bdev_io_pool_size": 65535, 00:22:24.007 "bdev_io_cache_size": 256, 00:22:24.007 "bdev_auto_examine": true, 00:22:24.007 "iobuf_small_cache_size": 128, 00:22:24.007 "iobuf_large_cache_size": 16 00:22:24.007 } 00:22:24.007 }, 00:22:24.007 { 00:22:24.007 "method": "bdev_raid_set_options", 00:22:24.007 "params": { 00:22:24.007 "process_window_size_kb": 1024, 00:22:24.007 "process_max_bandwidth_mb_sec": 0 00:22:24.007 } 00:22:24.007 }, 00:22:24.008 { 00:22:24.008 "method": "bdev_iscsi_set_options", 00:22:24.008 "params": { 00:22:24.008 "timeout_sec": 30 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "bdev_nvme_set_options", 00:22:24.008 "params": { 00:22:24.008 "action_on_timeout": "none", 00:22:24.008 "timeout_us": 0, 00:22:24.008 "timeout_admin_us": 0, 00:22:24.008 "keep_alive_timeout_ms": 10000, 00:22:24.008 "arbitration_burst": 0, 00:22:24.008 "low_priority_weight": 0, 00:22:24.008 "medium_priority_weight": 0, 00:22:24.008 "high_priority_weight": 0, 00:22:24.008 "nvme_adminq_poll_period_us": 10000, 00:22:24.008 "nvme_ioq_poll_period_us": 0, 00:22:24.008 "io_queue_requests": 0, 00:22:24.008 "delay_cmd_submit": true, 00:22:24.008 "transport_retry_count": 4, 00:22:24.008 "bdev_retry_count": 3, 00:22:24.008 "transport_ack_timeout": 0, 00:22:24.008 "ctrlr_loss_timeout_sec": 0, 00:22:24.008 "reconnect_delay_sec": 0, 00:22:24.008 "fast_io_fail_timeout_sec": 0, 00:22:24.008 "disable_auto_failback": false, 00:22:24.008 "generate_uuids": false, 00:22:24.008 "transport_tos": 0, 00:22:24.008 "nvme_error_stat": false, 00:22:24.008 "rdma_srq_size": 0, 00:22:24.008 "io_path_stat": false, 00:22:24.008 "allow_accel_sequence": false, 00:22:24.008 "rdma_max_cq_size": 0, 00:22:24.008 "rdma_cm_event_timeout_ms": 0, 00:22:24.008 "dhchap_digests": [ 00:22:24.008 "sha256", 00:22:24.008 "sha384", 00:22:24.008 "sha512" 00:22:24.008 ], 00:22:24.008 "dhchap_dhgroups": [ 00:22:24.008 "null", 00:22:24.008 "ffdhe2048", 00:22:24.008 "ffdhe3072", 00:22:24.008 "ffdhe4096", 00:22:24.008 "ffdhe6144", 00:22:24.008 "ffdhe8192" 00:22:24.008 ] 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "bdev_nvme_set_hotplug", 00:22:24.008 "params": { 00:22:24.008 "period_us": 100000, 00:22:24.008 "enable": false 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "bdev_malloc_create", 00:22:24.008 "params": { 00:22:24.008 "name": "malloc0", 00:22:24.008 "num_blocks": 8192, 00:22:24.008 "block_size": 4096, 00:22:24.008 "physical_block_size": 4096, 00:22:24.008 "uuid": "f0bfeb90-1b69-48d1-85a2-3ba0a2e9f391", 00:22:24.008 "optimal_io_boundary": 0, 00:22:24.008 "md_size": 0, 00:22:24.008 "dif_type": 0, 00:22:24.008 "dif_is_head_of_md": false, 00:22:24.008 "dif_pi_format": 0 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "bdev_wait_for_examine" 00:22:24.008 } 00:22:24.008 ] 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "subsystem": "nbd", 00:22:24.008 "config": [] 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "subsystem": "scheduler", 00:22:24.008 "config": [ 00:22:24.008 { 00:22:24.008 "method": "framework_set_scheduler", 00:22:24.008 "params": { 00:22:24.008 "name": "static" 00:22:24.008 } 00:22:24.008 } 00:22:24.008 ] 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "subsystem": "nvmf", 00:22:24.008 "config": [ 00:22:24.008 { 00:22:24.008 
"method": "nvmf_set_config", 00:22:24.008 "params": { 00:22:24.008 "discovery_filter": "match_any", 00:22:24.008 "admin_cmd_passthru": { 00:22:24.008 "identify_ctrlr": false 00:22:24.008 } 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "nvmf_set_max_subsystems", 00:22:24.008 "params": { 00:22:24.008 "max_subsystems": 1024 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "nvmf_set_crdt", 00:22:24.008 "params": { 00:22:24.008 "crdt1": 0, 00:22:24.008 "crdt2": 0, 00:22:24.008 "crdt3": 0 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "nvmf_create_transport", 00:22:24.008 "params": { 00:22:24.008 "trtype": "TCP", 00:22:24.008 "max_queue_depth": 128, 00:22:24.008 "max_io_qpairs_per_ctrlr": 127, 00:22:24.008 "in_capsule_data_size": 4096, 00:22:24.008 "max_io_size": 131072, 00:22:24.008 "io_unit_size": 131072, 00:22:24.008 "max_aq_depth": 128, 00:22:24.008 "num_shared_buffers": 511, 00:22:24.008 "buf_cache_size": 4294967295, 00:22:24.008 "dif_insert_or_strip": false, 00:22:24.008 "zcopy": false, 00:22:24.008 "c2h_success": false, 00:22:24.008 "sock_priority": 0, 00:22:24.008 "abort_timeout_sec": 1, 00:22:24.008 "ack_timeout": 0, 00:22:24.008 "data_wr_pool_size": 0 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "nvmf_create_subsystem", 00:22:24.008 "params": { 00:22:24.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.008 "allow_any_host": false, 00:22:24.008 "serial_number": "00000000000000000000", 00:22:24.008 "model_number": "SPDK bdev Controller", 00:22:24.008 "max_namespaces": 32, 00:22:24.008 "min_cntlid": 1, 00:22:24.008 "max_cntlid": 65519, 00:22:24.008 "ana_reporting": false 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "nvmf_subsystem_add_host", 00:22:24.008 "params": { 00:22:24.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.008 "host": "nqn.2016-06.io.spdk:host1", 00:22:24.008 "psk": "key0" 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "nvmf_subsystem_add_ns", 00:22:24.008 "params": { 00:22:24.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.008 "namespace": { 00:22:24.008 "nsid": 1, 00:22:24.008 "bdev_name": "malloc0", 00:22:24.008 "nguid": "F0BFEB901B6948D185A23BA0A2E9F391", 00:22:24.008 "uuid": "f0bfeb90-1b69-48d1-85a2-3ba0a2e9f391", 00:22:24.008 "no_auto_visible": false 00:22:24.008 } 00:22:24.008 } 00:22:24.008 }, 00:22:24.008 { 00:22:24.008 "method": "nvmf_subsystem_add_listener", 00:22:24.008 "params": { 00:22:24.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.008 "listen_address": { 00:22:24.008 "trtype": "TCP", 00:22:24.008 "adrfam": "IPv4", 00:22:24.008 "traddr": "10.0.0.2", 00:22:24.008 "trsvcid": "4420" 00:22:24.008 }, 00:22:24.008 "secure_channel": false, 00:22:24.008 "sock_impl": "ssl" 00:22:24.008 } 00:22:24.008 } 00:22:24.008 ] 00:22:24.008 } 00:22:24.008 ] 00:22:24.008 }' 00:22:24.008 02:21:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:24.266 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:24.266 "subsystems": [ 00:22:24.266 { 00:22:24.266 "subsystem": "keyring", 00:22:24.266 "config": [ 00:22:24.266 { 00:22:24.266 "method": "keyring_file_add_key", 00:22:24.266 "params": { 00:22:24.266 "name": "key0", 00:22:24.266 "path": "/tmp/tmp.p74jQDqwoH" 00:22:24.266 } 00:22:24.266 } 00:22:24.266 ] 00:22:24.266 }, 00:22:24.266 { 00:22:24.266 "subsystem": "iobuf", 00:22:24.266 
"config": [ 00:22:24.266 { 00:22:24.266 "method": "iobuf_set_options", 00:22:24.266 "params": { 00:22:24.266 "small_pool_count": 8192, 00:22:24.266 "large_pool_count": 1024, 00:22:24.266 "small_bufsize": 8192, 00:22:24.266 "large_bufsize": 135168 00:22:24.267 } 00:22:24.267 } 00:22:24.267 ] 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "subsystem": "sock", 00:22:24.267 "config": [ 00:22:24.267 { 00:22:24.267 "method": "sock_set_default_impl", 00:22:24.267 "params": { 00:22:24.267 "impl_name": "posix" 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "sock_impl_set_options", 00:22:24.267 "params": { 00:22:24.267 "impl_name": "ssl", 00:22:24.267 "recv_buf_size": 4096, 00:22:24.267 "send_buf_size": 4096, 00:22:24.267 "enable_recv_pipe": true, 00:22:24.267 "enable_quickack": false, 00:22:24.267 "enable_placement_id": 0, 00:22:24.267 "enable_zerocopy_send_server": true, 00:22:24.267 "enable_zerocopy_send_client": false, 00:22:24.267 "zerocopy_threshold": 0, 00:22:24.267 "tls_version": 0, 00:22:24.267 "enable_ktls": false 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "sock_impl_set_options", 00:22:24.267 "params": { 00:22:24.267 "impl_name": "posix", 00:22:24.267 "recv_buf_size": 2097152, 00:22:24.267 "send_buf_size": 2097152, 00:22:24.267 "enable_recv_pipe": true, 00:22:24.267 "enable_quickack": false, 00:22:24.267 "enable_placement_id": 0, 00:22:24.267 "enable_zerocopy_send_server": true, 00:22:24.267 "enable_zerocopy_send_client": false, 00:22:24.267 "zerocopy_threshold": 0, 00:22:24.267 "tls_version": 0, 00:22:24.267 "enable_ktls": false 00:22:24.267 } 00:22:24.267 } 00:22:24.267 ] 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "subsystem": "vmd", 00:22:24.267 "config": [] 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "subsystem": "accel", 00:22:24.267 "config": [ 00:22:24.267 { 00:22:24.267 "method": "accel_set_options", 00:22:24.267 "params": { 00:22:24.267 "small_cache_size": 128, 00:22:24.267 "large_cache_size": 16, 00:22:24.267 "task_count": 2048, 00:22:24.267 "sequence_count": 2048, 00:22:24.267 "buf_count": 2048 00:22:24.267 } 00:22:24.267 } 00:22:24.267 ] 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "subsystem": "bdev", 00:22:24.267 "config": [ 00:22:24.267 { 00:22:24.267 "method": "bdev_set_options", 00:22:24.267 "params": { 00:22:24.267 "bdev_io_pool_size": 65535, 00:22:24.267 "bdev_io_cache_size": 256, 00:22:24.267 "bdev_auto_examine": true, 00:22:24.267 "iobuf_small_cache_size": 128, 00:22:24.267 "iobuf_large_cache_size": 16 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "bdev_raid_set_options", 00:22:24.267 "params": { 00:22:24.267 "process_window_size_kb": 1024, 00:22:24.267 "process_max_bandwidth_mb_sec": 0 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "bdev_iscsi_set_options", 00:22:24.267 "params": { 00:22:24.267 "timeout_sec": 30 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "bdev_nvme_set_options", 00:22:24.267 "params": { 00:22:24.267 "action_on_timeout": "none", 00:22:24.267 "timeout_us": 0, 00:22:24.267 "timeout_admin_us": 0, 00:22:24.267 "keep_alive_timeout_ms": 10000, 00:22:24.267 "arbitration_burst": 0, 00:22:24.267 "low_priority_weight": 0, 00:22:24.267 "medium_priority_weight": 0, 00:22:24.267 "high_priority_weight": 0, 00:22:24.267 "nvme_adminq_poll_period_us": 10000, 00:22:24.267 "nvme_ioq_poll_period_us": 0, 00:22:24.267 "io_queue_requests": 512, 00:22:24.267 "delay_cmd_submit": true, 00:22:24.267 "transport_retry_count": 4, 00:22:24.267 "bdev_retry_count": 3, 
00:22:24.267 "transport_ack_timeout": 0, 00:22:24.267 "ctrlr_loss_timeout_sec": 0, 00:22:24.267 "reconnect_delay_sec": 0, 00:22:24.267 "fast_io_fail_timeout_sec": 0, 00:22:24.267 "disable_auto_failback": false, 00:22:24.267 "generate_uuids": false, 00:22:24.267 "transport_tos": 0, 00:22:24.267 "nvme_error_stat": false, 00:22:24.267 "rdma_srq_size": 0, 00:22:24.267 "io_path_stat": false, 00:22:24.267 "allow_accel_sequence": false, 00:22:24.267 "rdma_max_cq_size": 0, 00:22:24.267 "rdma_cm_event_timeout_ms": 0, 00:22:24.267 "dhchap_digests": [ 00:22:24.267 "sha256", 00:22:24.267 "sha384", 00:22:24.267 "sha512" 00:22:24.267 ], 00:22:24.267 "dhchap_dhgroups": [ 00:22:24.267 "null", 00:22:24.267 "ffdhe2048", 00:22:24.267 "ffdhe3072", 00:22:24.267 "ffdhe4096", 00:22:24.267 "ffdhe6144", 00:22:24.267 "ffdhe8192" 00:22:24.267 ] 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "bdev_nvme_attach_controller", 00:22:24.267 "params": { 00:22:24.267 "name": "nvme0", 00:22:24.267 "trtype": "TCP", 00:22:24.267 "adrfam": "IPv4", 00:22:24.267 "traddr": "10.0.0.2", 00:22:24.267 "trsvcid": "4420", 00:22:24.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.267 "prchk_reftag": false, 00:22:24.267 "prchk_guard": false, 00:22:24.267 "ctrlr_loss_timeout_sec": 0, 00:22:24.267 "reconnect_delay_sec": 0, 00:22:24.267 "fast_io_fail_timeout_sec": 0, 00:22:24.267 "psk": "key0", 00:22:24.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.267 "hdgst": false, 00:22:24.267 "ddgst": false 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "bdev_nvme_set_hotplug", 00:22:24.267 "params": { 00:22:24.267 "period_us": 100000, 00:22:24.267 "enable": false 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "bdev_enable_histogram", 00:22:24.267 "params": { 00:22:24.267 "name": "nvme0n1", 00:22:24.267 "enable": true 00:22:24.267 } 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "method": "bdev_wait_for_examine" 00:22:24.267 } 00:22:24.267 ] 00:22:24.267 }, 00:22:24.267 { 00:22:24.267 "subsystem": "nbd", 00:22:24.267 "config": [] 00:22:24.267 } 00:22:24.267 ] 00:22:24.267 }' 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1072090 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1072090 ']' 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1072090 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1072090 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1072090' 00:22:24.267 killing process with pid 1072090 00:22:24.267 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1072090 00:22:24.267 Received shutdown signal, test time was about 1.000000 seconds 00:22:24.267 00:22:24.267 Latency(us) 00:22:24.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.267 
=================================================================================================================== 00:22:24.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.268 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1072090 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1072061 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1072061 ']' 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1072061 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1072061 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1072061' 00:22:24.525 killing process with pid 1072061 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1072061 00:22:24.525 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1072061 00:22:24.783 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:24.783 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.783 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:24.783 "subsystems": [ 00:22:24.783 { 00:22:24.783 "subsystem": "keyring", 00:22:24.783 "config": [ 00:22:24.783 { 00:22:24.783 "method": "keyring_file_add_key", 00:22:24.783 "params": { 00:22:24.783 "name": "key0", 00:22:24.783 "path": "/tmp/tmp.p74jQDqwoH" 00:22:24.783 } 00:22:24.783 } 00:22:24.783 ] 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "subsystem": "iobuf", 00:22:24.783 "config": [ 00:22:24.783 { 00:22:24.783 "method": "iobuf_set_options", 00:22:24.783 "params": { 00:22:24.783 "small_pool_count": 8192, 00:22:24.783 "large_pool_count": 1024, 00:22:24.783 "small_bufsize": 8192, 00:22:24.783 "large_bufsize": 135168 00:22:24.783 } 00:22:24.783 } 00:22:24.783 ] 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "subsystem": "sock", 00:22:24.783 "config": [ 00:22:24.783 { 00:22:24.783 "method": "sock_set_default_impl", 00:22:24.783 "params": { 00:22:24.783 "impl_name": "posix" 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "sock_impl_set_options", 00:22:24.783 "params": { 00:22:24.783 "impl_name": "ssl", 00:22:24.783 "recv_buf_size": 4096, 00:22:24.783 "send_buf_size": 4096, 00:22:24.783 "enable_recv_pipe": true, 00:22:24.783 "enable_quickack": false, 00:22:24.783 "enable_placement_id": 0, 00:22:24.783 "enable_zerocopy_send_server": true, 00:22:24.783 "enable_zerocopy_send_client": false, 00:22:24.783 "zerocopy_threshold": 0, 00:22:24.783 "tls_version": 0, 00:22:24.783 "enable_ktls": false 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "sock_impl_set_options", 00:22:24.783 "params": { 00:22:24.783 "impl_name": "posix", 00:22:24.783 "recv_buf_size": 2097152, 
00:22:24.783 "send_buf_size": 2097152, 00:22:24.783 "enable_recv_pipe": true, 00:22:24.783 "enable_quickack": false, 00:22:24.783 "enable_placement_id": 0, 00:22:24.783 "enable_zerocopy_send_server": true, 00:22:24.783 "enable_zerocopy_send_client": false, 00:22:24.783 "zerocopy_threshold": 0, 00:22:24.783 "tls_version": 0, 00:22:24.783 "enable_ktls": false 00:22:24.783 } 00:22:24.783 } 00:22:24.783 ] 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "subsystem": "vmd", 00:22:24.783 "config": [] 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "subsystem": "accel", 00:22:24.783 "config": [ 00:22:24.783 { 00:22:24.783 "method": "accel_set_options", 00:22:24.783 "params": { 00:22:24.783 "small_cache_size": 128, 00:22:24.783 "large_cache_size": 16, 00:22:24.783 "task_count": 2048, 00:22:24.783 "sequence_count": 2048, 00:22:24.783 "buf_count": 2048 00:22:24.783 } 00:22:24.783 } 00:22:24.783 ] 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "subsystem": "bdev", 00:22:24.783 "config": [ 00:22:24.783 { 00:22:24.783 "method": "bdev_set_options", 00:22:24.783 "params": { 00:22:24.783 "bdev_io_pool_size": 65535, 00:22:24.783 "bdev_io_cache_size": 256, 00:22:24.783 "bdev_auto_examine": true, 00:22:24.783 "iobuf_small_cache_size": 128, 00:22:24.783 "iobuf_large_cache_size": 16 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "bdev_raid_set_options", 00:22:24.783 "params": { 00:22:24.783 "process_window_size_kb": 1024, 00:22:24.783 "process_max_bandwidth_mb_sec": 0 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "bdev_iscsi_set_options", 00:22:24.783 "params": { 00:22:24.783 "timeout_sec": 30 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "bdev_nvme_set_options", 00:22:24.783 "params": { 00:22:24.783 "action_on_timeout": "none", 00:22:24.783 "timeout_us": 0, 00:22:24.783 "timeout_admin_us": 0, 00:22:24.783 "keep_alive_timeout_ms": 10000, 00:22:24.783 "arbitration_burst": 0, 00:22:24.783 "low_priority_weight": 0, 00:22:24.783 "medium_priority_weight": 0, 00:22:24.783 "high_priority_weight": 0, 00:22:24.783 "nvme_adminq_poll_period_us": 10000, 00:22:24.783 "nvme_ioq_poll_period_us": 0, 00:22:24.783 "io_queue_requests": 0, 00:22:24.783 "delay_cmd_submit": true, 00:22:24.783 "transport_retry_count": 4, 00:22:24.783 "bdev_retry_count": 3, 00:22:24.783 "transport_ack_timeout": 0, 00:22:24.783 "ctrlr_loss_timeout_sec": 0, 00:22:24.783 "reconnect_delay_sec": 0, 00:22:24.783 "fast_io_fail_timeout_sec": 0, 00:22:24.783 "disable_auto_failback": false, 00:22:24.783 "generate_uuids": false, 00:22:24.783 "transport_tos": 0, 00:22:24.783 "nvme_error_stat": false, 00:22:24.783 "rdma_srq_size": 0, 00:22:24.783 "io_path_stat": false, 00:22:24.783 "allow_accel_sequence": false, 00:22:24.783 "rdma_max_cq_size": 0, 00:22:24.783 "rdma_cm_event_timeout_ms": 0, 00:22:24.783 "dhchap_digests": [ 00:22:24.783 "sha256", 00:22:24.783 "sha384", 00:22:24.783 "sha512" 00:22:24.783 ], 00:22:24.783 "dhchap_dhgroups": [ 00:22:24.783 "null", 00:22:24.783 "ffdhe2048", 00:22:24.783 "ffdhe3072", 00:22:24.783 "ffdhe4096", 00:22:24.783 "ffdhe6144", 00:22:24.783 "ffdhe8192" 00:22:24.783 ] 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "bdev_nvme_set_hotplug", 00:22:24.783 "params": { 00:22:24.783 "period_us": 100000, 00:22:24.783 "enable": false 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "bdev_malloc_create", 00:22:24.783 "params": { 00:22:24.783 "name": "malloc0", 00:22:24.783 "num_blocks": 8192, 00:22:24.783 "block_size": 4096, 00:22:24.783 
"physical_block_size": 4096, 00:22:24.783 "uuid": "f0bfeb90-1b69-48d1-85a2-3ba0a2e9f391", 00:22:24.783 "optimal_io_boundary": 0, 00:22:24.783 "md_size": 0, 00:22:24.783 "dif_type": 0, 00:22:24.783 "dif_is_head_of_md": false, 00:22:24.783 "dif_pi_format": 0 00:22:24.783 } 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "method": "bdev_wait_for_examine" 00:22:24.783 } 00:22:24.783 ] 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "subsystem": "nbd", 00:22:24.783 "config": [] 00:22:24.783 }, 00:22:24.783 { 00:22:24.783 "subsystem": "scheduler", 00:22:24.783 "config": [ 00:22:24.783 { 00:22:24.783 "method": "framework_set_scheduler", 00:22:24.783 "params": { 00:22:24.783 "name": "static" 00:22:24.783 } 00:22:24.783 } 00:22:24.784 ] 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "subsystem": "nvmf", 00:22:24.784 "config": [ 00:22:24.784 { 00:22:24.784 "method": "nvmf_set_config", 00:22:24.784 "params": { 00:22:24.784 "discovery_filter": "match_any", 00:22:24.784 "admin_cmd_passthru": { 00:22:24.784 "identify_ctrlr": false 00:22:24.784 } 00:22:24.784 } 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "method": "nvmf_set_max_subsystems", 00:22:24.784 "params": { 00:22:24.784 "max_subsystems": 1024 00:22:24.784 } 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "method": "nvmf_set_crdt", 00:22:24.784 "params": { 00:22:24.784 "crdt1": 0, 00:22:24.784 "crdt2": 0, 00:22:24.784 "crdt3": 0 00:22:24.784 } 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "method": "nvmf_create_transport", 00:22:24.784 "params": { 00:22:24.784 "trtype": "TCP", 00:22:24.784 "max_queue_depth": 128, 00:22:24.784 "max_io_qpairs_per_ctrlr": 127, 00:22:24.784 "in_capsule_data_size": 4096, 00:22:24.784 "max_io_size": 131072, 00:22:24.784 "io_unit_size": 131072, 00:22:24.784 "max_aq_depth": 128, 00:22:24.784 "num_shared_buffers": 511, 00:22:24.784 "buf_cache_size": 4294967295, 00:22:24.784 "dif_insert_or_strip": false, 00:22:24.784 "zcopy": false, 00:22:24.784 "c2h_success": false, 00:22:24.784 "sock_priority": 0, 00:22:24.784 "abort_timeout_sec": 1, 00:22:24.784 "ack_timeout": 0, 00:22:24.784 "data_wr_pool_size": 0 00:22:24.784 } 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "method": "nvmf_create_subsystem", 00:22:24.784 "params": { 00:22:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.784 "allow_any_host": false, 00:22:24.784 "serial_number": "00000000000000000000", 00:22:24.784 "model_number": "SPDK bdev Controller", 00:22:24.784 "max_namespaces": 32, 00:22:24.784 "min_cntlid": 1, 00:22:24.784 "max_cntlid": 65519, 00:22:24.784 "ana_reporting": false 00:22:24.784 } 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "method": "nvmf_subsystem_add_host", 00:22:24.784 "params": { 00:22:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.784 "host": "nqn.2016-06.io.spdk:host1", 00:22:24.784 "psk": "key0" 00:22:24.784 } 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "method": "nvmf_subsystem_add_ns", 00:22:24.784 "params": { 00:22:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.784 "namespace": { 00:22:24.784 "nsid": 1, 00:22:24.784 "bdev_name": "malloc0", 00:22:24.784 "nguid": "F0BFEB901B6948D185A23BA0A2E9F391", 00:22:24.784 "uuid": "f0bfeb90-1b69-48d1-85a2-3ba0a2e9f391", 00:22:24.784 "no_auto_visible": false 00:22:24.784 } 00:22:24.784 } 00:22:24.784 }, 00:22:24.784 { 00:22:24.784 "method": "nvmf_subsystem_add_listener", 00:22:24.784 "params": { 00:22:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.784 "listen_address": { 00:22:24.784 "trtype": "TCP", 00:22:24.784 "adrfam": "IPv4", 00:22:24.784 "traddr": "10.0.0.2", 00:22:24.784 "trsvcid": "4420" 
00:22:24.784 }, 00:22:24.784 "secure_channel": false, 00:22:24.784 "sock_impl": "ssl" 00:22:24.784 } 00:22:24.784 } 00:22:24.784 ] 00:22:24.784 } 00:22:24.784 ] 00:22:24.784 }' 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1072491 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1072491 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1072491 ']' 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.784 02:21:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.784 [2024-07-27 02:21:52.831484] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:24.784 [2024-07-27 02:21:52.831561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.784 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.784 [2024-07-27 02:21:52.867921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:24.784 [2024-07-27 02:21:52.899696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.042 [2024-07-27 02:21:52.987555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.042 [2024-07-27 02:21:52.987622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.042 [2024-07-27 02:21:52.987636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.042 [2024-07-27 02:21:52.987661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.042 [2024-07-27 02:21:52.987671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
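This target instance is the config-replay step of the test: the JSON captured earlier with save_config is echoed back into nvmf_tgt through a process substitution, so the restarted target comes up with the same keyring, subsystem, and listener state. A minimal sketch of the pattern, assuming $SPDK points at the checkout:

    # capture the live target configuration as JSON over the default RPC socket
    CFG=$($SPDK/scripts/rpc.py save_config)
    # restart the target directly from that JSON (appears as -c /dev/fd/NN in the trace)
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$CFG")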
00:22:25.042 [2024-07-27 02:21:52.987740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.298 [2024-07-27 02:21:53.222973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.298 [2024-07-27 02:21:53.266642] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.298 [2024-07-27 02:21:53.266905] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1072649 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1072649 /var/tmp/bdevperf.sock 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1072649 ']' 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
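The initiator side repeats the same replay pattern, as the config echo below shows: bdevperf is launched idle (-z) with its saved JSON fed over a /dev/fd descriptor, and once its RPC socket is up the helper script drives the actual I/O. A minimal sketch, with $SPDK and $BPERF_CFG as placeholders for the checkout path and the captured bdevperf JSON:

    # start bdevperf idle (-z), reading bdev/controller config from the substituted fd
    $SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$BPERF_CFG")
    # kick off the configured verify workload against the attached controller
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests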
00:22:25.863 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:25.863 "subsystems": [ 00:22:25.863 { 00:22:25.863 "subsystem": "keyring", 00:22:25.863 "config": [ 00:22:25.863 { 00:22:25.863 "method": "keyring_file_add_key", 00:22:25.863 "params": { 00:22:25.863 "name": "key0", 00:22:25.863 "path": "/tmp/tmp.p74jQDqwoH" 00:22:25.863 } 00:22:25.863 } 00:22:25.863 ] 00:22:25.863 }, 00:22:25.863 { 00:22:25.863 "subsystem": "iobuf", 00:22:25.863 "config": [ 00:22:25.863 { 00:22:25.863 "method": "iobuf_set_options", 00:22:25.863 "params": { 00:22:25.863 "small_pool_count": 8192, 00:22:25.863 "large_pool_count": 1024, 00:22:25.863 "small_bufsize": 8192, 00:22:25.863 "large_bufsize": 135168 00:22:25.863 } 00:22:25.863 } 00:22:25.863 ] 00:22:25.863 }, 00:22:25.863 { 00:22:25.863 "subsystem": "sock", 00:22:25.863 "config": [ 00:22:25.863 { 00:22:25.863 "method": "sock_set_default_impl", 00:22:25.863 "params": { 00:22:25.863 "impl_name": "posix" 00:22:25.863 } 00:22:25.863 }, 00:22:25.863 { 00:22:25.863 "method": "sock_impl_set_options", 00:22:25.863 "params": { 00:22:25.863 "impl_name": "ssl", 00:22:25.863 "recv_buf_size": 4096, 00:22:25.863 "send_buf_size": 4096, 00:22:25.863 "enable_recv_pipe": true, 00:22:25.863 "enable_quickack": false, 00:22:25.863 "enable_placement_id": 0, 00:22:25.863 "enable_zerocopy_send_server": true, 00:22:25.863 "enable_zerocopy_send_client": false, 00:22:25.863 "zerocopy_threshold": 0, 00:22:25.863 "tls_version": 0, 00:22:25.863 "enable_ktls": false 00:22:25.863 } 00:22:25.863 }, 00:22:25.863 { 00:22:25.863 "method": "sock_impl_set_options", 00:22:25.863 "params": { 00:22:25.863 "impl_name": "posix", 00:22:25.863 "recv_buf_size": 2097152, 00:22:25.863 "send_buf_size": 2097152, 00:22:25.863 "enable_recv_pipe": true, 00:22:25.863 "enable_quickack": false, 00:22:25.863 "enable_placement_id": 0, 00:22:25.863 "enable_zerocopy_send_server": true, 00:22:25.863 "enable_zerocopy_send_client": false, 00:22:25.863 "zerocopy_threshold": 0, 00:22:25.863 "tls_version": 0, 00:22:25.863 "enable_ktls": false 00:22:25.863 } 00:22:25.863 } 00:22:25.863 ] 00:22:25.863 }, 00:22:25.863 { 00:22:25.863 "subsystem": "vmd", 00:22:25.863 "config": [] 00:22:25.863 }, 00:22:25.863 { 00:22:25.864 "subsystem": "accel", 00:22:25.864 "config": [ 00:22:25.864 { 00:22:25.864 "method": "accel_set_options", 00:22:25.864 "params": { 00:22:25.864 "small_cache_size": 128, 00:22:25.864 "large_cache_size": 16, 00:22:25.864 "task_count": 2048, 00:22:25.864 "sequence_count": 2048, 00:22:25.864 "buf_count": 2048 00:22:25.864 } 00:22:25.864 } 00:22:25.864 ] 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "subsystem": "bdev", 00:22:25.864 "config": [ 00:22:25.864 { 00:22:25.864 "method": "bdev_set_options", 00:22:25.864 "params": { 00:22:25.864 "bdev_io_pool_size": 65535, 00:22:25.864 "bdev_io_cache_size": 256, 00:22:25.864 "bdev_auto_examine": true, 00:22:25.864 "iobuf_small_cache_size": 128, 00:22:25.864 "iobuf_large_cache_size": 16 00:22:25.864 } 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "method": "bdev_raid_set_options", 00:22:25.864 "params": { 00:22:25.864 "process_window_size_kb": 1024, 00:22:25.864 "process_max_bandwidth_mb_sec": 0 00:22:25.864 } 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "method": "bdev_iscsi_set_options", 00:22:25.864 "params": { 00:22:25.864 "timeout_sec": 30 00:22:25.864 } 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "method": "bdev_nvme_set_options", 00:22:25.864 "params": { 00:22:25.864 "action_on_timeout": "none", 00:22:25.864 "timeout_us": 0, 
00:22:25.864 "timeout_admin_us": 0, 00:22:25.864 "keep_alive_timeout_ms": 10000, 00:22:25.864 "arbitration_burst": 0, 00:22:25.864 "low_priority_weight": 0, 00:22:25.864 "medium_priority_weight": 0, 00:22:25.864 "high_priority_weight": 0, 00:22:25.864 "nvme_adminq_poll_period_us": 10000, 00:22:25.864 "nvme_ioq_poll_period_us": 0, 00:22:25.864 "io_queue_requests": 512, 00:22:25.864 "delay_cmd_submit": true, 00:22:25.864 "transport_retry_count": 4, 00:22:25.864 "bdev_retry_count": 3, 00:22:25.864 "transport_ack_timeout": 0, 00:22:25.864 "ctrlr_loss_timeout_sec": 0, 00:22:25.864 "reconnect_delay_sec": 0, 00:22:25.864 "fast_io_fail_timeout_sec": 0, 00:22:25.864 "disable_auto_failback": false, 00:22:25.864 "generate_uuids": false, 00:22:25.864 "transport_tos": 0, 00:22:25.864 "nvme_error_stat": false, 00:22:25.864 "rdma_srq_size": 0, 00:22:25.864 "io_path_stat": false, 00:22:25.864 "allow_accel_sequence": false, 00:22:25.864 "rdma_max_cq_size": 0, 00:22:25.864 "rdma_cm_event_timeout_ms": 0, 00:22:25.864 "dhchap_digests": [ 00:22:25.864 "sha256", 00:22:25.864 "sha384", 00:22:25.864 "sha512" 00:22:25.864 ], 00:22:25.864 "dhchap_dhgroups": [ 00:22:25.864 "null", 00:22:25.864 "ffdhe2048", 00:22:25.864 "ffdhe3072", 00:22:25.864 "ffdhe4096", 00:22:25.864 "ffdhe6144", 00:22:25.864 "ffdhe8192" 00:22:25.864 ] 00:22:25.864 } 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "method": "bdev_nvme_attach_controller", 00:22:25.864 "params": { 00:22:25.864 "name": "nvme0", 00:22:25.864 "trtype": "TCP", 00:22:25.864 "adrfam": "IPv4", 00:22:25.864 "traddr": "10.0.0.2", 00:22:25.864 "trsvcid": "4420", 00:22:25.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.864 "prchk_reftag": false, 00:22:25.864 "prchk_guard": false, 00:22:25.864 "ctrlr_loss_timeout_sec": 0, 00:22:25.864 "reconnect_delay_sec": 0, 00:22:25.864 "fast_io_fail_timeout_sec": 0, 00:22:25.864 "psk": "key0", 00:22:25.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.864 "hdgst": false, 00:22:25.864 "ddgst": false 00:22:25.864 } 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "method": "bdev_nvme_set_hotplug", 00:22:25.864 "params": { 00:22:25.864 "period_us": 100000, 00:22:25.864 "enable": false 00:22:25.864 } 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "method": "bdev_enable_histogram", 00:22:25.864 "params": { 00:22:25.864 "name": "nvme0n1", 00:22:25.864 "enable": true 00:22:25.864 } 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "method": "bdev_wait_for_examine" 00:22:25.864 } 00:22:25.864 ] 00:22:25.864 }, 00:22:25.864 { 00:22:25.864 "subsystem": "nbd", 00:22:25.864 "config": [] 00:22:25.864 } 00:22:25.864 ] 00:22:25.864 }' 00:22:25.864 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.864 02:21:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.864 [2024-07-27 02:21:53.890659] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:25.864 [2024-07-27 02:21:53.890731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072649 ] 00:22:25.864 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.864 [2024-07-27 02:21:53.921812] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:25.864 [2024-07-27 02:21:53.952882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.122 [2024-07-27 02:21:54.043623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.122 [2024-07-27 02:21:54.223299] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.054 02:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.054 02:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:27.054 02:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.054 02:21:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:27.054 02:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.054 02:21:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.054 Running I/O for 1 seconds... 00:22:28.436 00:22:28.436 Latency(us) 00:22:28.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.436 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:28.436 Verification LBA range: start 0x0 length 0x2000 00:22:28.436 nvme0n1 : 1.06 1663.31 6.50 0.00 0.00 75078.08 10145.94 107187.77 00:22:28.437 =================================================================================================================== 00:22:28.437 Total : 1663.31 6.50 0.00 0.00 75078.08 10145.94 107187.77 00:22:28.437 0 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:28.437 nvmf_trace.0 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1072649 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1072649 ']' 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 1072649 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1072649 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1072649' 00:22:28.437 killing process with pid 1072649 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1072649 00:22:28.437 Received shutdown signal, test time was about 1.000000 seconds 00:22:28.437 00:22:28.437 Latency(us) 00:22:28.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.437 =================================================================================================================== 00:22:28.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.437 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1072649 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.695 rmmod nvme_tcp 00:22:28.695 rmmod nvme_fabrics 00:22:28.695 rmmod nvme_keyring 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1072491 ']' 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1072491 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1072491 ']' 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1072491 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1072491 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:28.695 02:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1072491' 00:22:28.695 killing process with pid 1072491 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1072491 00:22:28.695 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1072491 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.953 02:21:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.859 02:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.859 02:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UX3Fa9SeDJ /tmp/tmp.SzWOJcS0ts /tmp/tmp.p74jQDqwoH 00:22:30.859 00:22:30.859 real 1m19.515s 00:22:30.859 user 2m5.972s 00:22:30.859 sys 0m28.857s 00:22:30.859 02:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.859 02:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.859 ************************************ 00:22:30.859 END TEST nvmf_tls 00:22:30.859 ************************************ 00:22:30.859 02:21:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:30.859 02:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:30.859 02:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.859 02:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:31.119 ************************************ 00:22:31.119 START TEST nvmf_fips 00:22:31.119 ************************************ 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:31.119 * Looking for test storage... 
00:22:31.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:31.119 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:31.120 Error setting digest 00:22:31.120 00A21427067F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:31.120 00A21427067F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.120 02:21:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.039 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.040 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
00:22:33.040 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.040 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.040 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.040 
02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.040 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:22:33.345 00:22:33.345 --- 10.0.0.2 ping statistics --- 00:22:33.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.345 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:22:33.345 00:22:33.345 --- 10.0.0.1 ping statistics --- 00:22:33.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.345 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1074889 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1074889 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1074889 ']' 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.345 02:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:33.345 [2024-07-27 02:22:01.396929] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
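[annotation] nvmfappstart, traced around this point (nvmf/common.sh@480-482), launches nvmf_tgt inside the freshly built namespace and then blocks in waitforlisten until the RPC socket answers. A reduced sketch of that launch; waitforlisten itself is not expanded in this trace, so the polling loop and the rpc_get_methods probe below are illustrative assumptions, not the script's own code:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    # illustrative liveness probe; any RPC that answers once the app is up would do
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done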
00:22:33.345 [2024-07-27 02:22:01.396997] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.345 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.345 [2024-07-27 02:22:01.432889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:33.345 [2024-07-27 02:22:01.463170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.611 [2024-07-27 02:22:01.553317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.611 [2024-07-27 02:22:01.553379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.611 [2024-07-27 02:22:01.553406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.611 [2024-07-27 02:22:01.553421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.611 [2024-07-27 02:22:01.553434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.612 [2024-07-27 02:22:01.553463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:34.178 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:34.436 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:34.436 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:34.436 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:34.694 [2024-07-27 02:22:02.612245] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.694 [2024-07-27 02:22:02.628242] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:22:34.694 [2024-07-27 02:22:02.628497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.694 [2024-07-27 02:22:02.660184] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:34.694 malloc0 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1075044 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1075044 /var/tmp/bdevperf.sock 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1075044 ']' 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.694 02:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:34.694 [2024-07-27 02:22:02.755335] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:22:34.694 [2024-07-27 02:22:02.755439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075044 ] 00:22:34.694 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.694 [2024-07-27 02:22:02.790577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
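[annotation] Everything TLS-specific in this test reduces to the key handling traced above plus the attach call traced just below: the PSK is written verbatim to a 0600 file and handed to both ends by path (hence the nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk deprecation warnings in this log, which flag exactly this path-based form for removal in v24.09). Condensed from the trace, with key.txt standing in for the full fips/key.txt path:

    # fips.sh@136-139: key material to a private file
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt
    # fips.sh@150: initiator side - attach a TLS controller through bdevperf's RPC socket
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt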
00:22:34.694 [2024-07-27 02:22:02.819237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.953 [2024-07-27 02:22:02.907216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.953 02:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.953 02:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:34.953 02:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:35.212 [2024-07-27 02:22:03.233322] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.212 [2024-07-27 02:22:03.233439] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:35.212 TLSTESTn1 00:22:35.212 02:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:35.469 Running I/O for 10 seconds... 00:22:45.424 00:22:45.424 Latency(us) 00:22:45.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.424 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:45.424 Verification LBA range: start 0x0 length 0x2000 00:22:45.424 TLSTESTn1 : 10.06 1792.25 7.00 0.00 0.00 71203.49 6165.24 100197.26 00:22:45.424 =================================================================================================================== 00:22:45.424 Total : 1792.25 7.00 0.00 0.00 71203.49 6165.24 100197.26 00:22:45.424 0 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:45.424 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:45.424 nvmf_trace.0 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1075044 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' 
-z 1075044 ']' 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1075044 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1075044 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1075044' 00:22:45.682 killing process with pid 1075044 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1075044 00:22:45.682 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.682 00:22:45.682 Latency(us) 00:22:45.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.682 =================================================================================================================== 00:22:45.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.682 [2024-07-27 02:22:13.629776] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1075044 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.682 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.682 rmmod nvme_tcp 00:22:45.940 rmmod nvme_fabrics 00:22:45.940 rmmod nvme_keyring 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1074889 ']' 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1074889 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1074889 ']' 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1074889 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1074889 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1074889' 00:22:45.940 killing process with pid 1074889 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1074889 00:22:45.940 [2024-07-27 02:22:13.927913] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:45.940 02:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1074889 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.198 02:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:48.100 00:22:48.100 real 0m17.168s 00:22:48.100 user 0m20.995s 00:22:48.100 sys 0m6.612s 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:48.100 ************************************ 00:22:48.100 END TEST nvmf_fips 00:22:48.100 ************************************ 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:48.100 ************************************ 00:22:48.100 START TEST nvmf_fuzz 00:22:48.100 ************************************ 00:22:48.100 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:48.358 * Looking for test storage... 
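[annotation] Before the fuzz test repeats the same init, note that nvmftestfini (traced twice above, once for nvmf_tls and once for nvmf_fips) is symmetric with it: unload the nvme modules, kill the target, undo the namespace plumbing, delete per-test key files. A condensed sketch using this run's interface and namespace names; _remove_spdk_ns is never expanded in the trace, so the ip netns delete below is an assumed equivalent, not the script's own code:

    set +e
    for i in {1..20}; do                      # @121: module unload can need retries
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    killprocess "$nvmfpid"                    # @490: the target started by nvmfappstart
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                  # @279: drop the initiator-side address
    rm -f key.txt                             # fips.sh@18: per-test key cleanup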
00:22:48.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.358 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
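[annotation] prepare_net_devs is about to repeat, for the fuzz run, the discovery already traced in full during the fips test (nvmf/common.sh@289-404): whitelist the known e810/x722/mlx PCI device IDs, then resolve each surviving PCI function to its kernel netdev through sysfs. The resolution step is a single glob per function; a reduced sketch with this machine's two 0x8086:0x159b functions hard-coded for illustration:

    pci_devs=(0000:0a:00.0 0000:0a:00.1)    # the two e810 functions found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # @383: the kernel exposes the bound netdev under the PCI function's sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # @399-401: strip the path, keep only the interface name
        net_devs+=("${pci_net_devs[@]##*/}")
    done
    echo "net_devs: ${net_devs[*]}"          # -> cvl_0_0 cvl_0_1 on this host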
00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.359 02:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.261 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:50.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:50.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.262 02:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:50.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:50.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.262 
02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:22:50.262 00:22:50.262 --- 10.0.0.2 ping statistics --- 00:22:50.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.262 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:50.262 00:22:50.262 --- 10.0.0.1 ping statistics --- 00:22:50.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.262 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.262 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1078505 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1078505 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1078505 
']' 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.521 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.779 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.779 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:22:50.779 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.779 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.779 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.779 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.780 Malloc0 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:50.780 02:22:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:22.875 Fuzzing completed. Shutting down the fuzz application 00:23:22.875 00:23:22.875 Dumping successful admin opcodes: 00:23:22.875 8, 9, 10, 24, 00:23:22.875 Dumping successful io opcodes: 00:23:22.875 0, 9, 00:23:22.875 NS: 0x200003aeff00 I/O qp, Total commands completed: 465304, total successful commands: 2691, random_seed: 3094403136 00:23:22.875 NS: 0x200003aeff00 admin qp, Total commands completed: 58272, total successful commands: 464, random_seed: 1693135936 00:23:22.875 02:22:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:22.875 Fuzzing completed. Shutting down the fuzz application 00:23:22.875 00:23:22.875 Dumping successful admin opcodes: 00:23:22.875 24, 00:23:22.875 Dumping successful io opcodes: 00:23:22.875 00:23:22.875 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 159173151 00:23:22.875 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 159381760 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.875 rmmod nvme_tcp 00:23:22.875 rmmod nvme_fabrics 00:23:22.875 rmmod nvme_keyring 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1078505 ']' 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
1078505 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1078505 ']' 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1078505 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1078505 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1078505' 00:23:22.875 killing process with pid 1078505 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1078505 00:23:22.875 02:22:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1078505 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.135 02:22:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:25.040 00:23:25.040 real 0m36.911s 00:23:25.040 user 0m50.595s 00:23:25.040 sys 0m15.374s 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:25.040 ************************************ 00:23:25.040 END TEST nvmf_fuzz 00:23:25.040 ************************************ 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.040 02:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:25.298 ************************************ 00:23:25.298 START TEST 
nvmf_multiconnection 00:23:25.298 ************************************ 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:25.298 * Looking for test storage... 00:23:25.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.298 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.299 02:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.198 02:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:27.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:27.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:27.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.198 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:27.199 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:27.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:23:27.199 00:23:27.199 --- 10.0.0.2 ping statistics --- 00:23:27.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.199 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:23:27.199 00:23:27.199 --- 10.0.0.1 ping statistics --- 00:23:27.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.199 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1084241 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1084241 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1084241 ']' 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
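[annotation] The nvmftestinit sequence above, and the target start that follows, amount to a two-namespace TCP test bed: one port of the E810 pair stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) while its sibling is moved into cvl_0_0_ns_spdk as the target (cvl_0_0, 10.0.0.2), with pings in both directions as a sanity check before nvmf_tgt is launched inside the namespace. Below is a condensed sketch of that bring-up plus the `seq 1 $NVMF_SUBSYS` provisioning loop traced after this point; the readiness-poll loop and the 3-subsystem count are illustrative simplifications, while the RPC commands and flags mirror the rpc_cmd calls in this log:

  # two cabled ports: IF_INIT stays in the root namespace, IF_TGT moves out
  NS=cvl_0_0_ns_spdk IF_INIT=cvl_0_1 IF_TGT=cvl_0_0
  ip netns add $NS
  ip link set $IF_TGT netns $NS
  ip addr add 10.0.0.1/24 dev $IF_INIT
  ip netns exec $NS ip addr add 10.0.0.2/24 dev $IF_TGT
  ip link set $IF_INIT up
  ip netns exec $NS ip link set $IF_TGT up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i $IF_INIT -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1

  # launch the target inside the namespace; its RPC unix socket lives on the
  # filesystem, so it stays reachable from the root namespace; poll until up
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # create the TCP transport, then provision one malloc-backed subsystem per
  # iteration (the test loops i over 1..11; 3 shown here to keep it short)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 3); do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
  done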
00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.199 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 [2024-07-27 02:22:55.330470] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:23:27.199 [2024-07-27 02:22:55.330545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.458 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.458 [2024-07-27 02:22:55.368833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:27.458 [2024-07-27 02:22:55.396660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.458 [2024-07-27 02:22:55.482493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.458 [2024-07-27 02:22:55.482545] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.458 [2024-07-27 02:22:55.482569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.458 [2024-07-27 02:22:55.482581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.458 [2024-07-27 02:22:55.482592] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.458 [2024-07-27 02:22:55.482655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.458 [2024-07-27 02:22:55.482712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.458 [2024-07-27 02:22:55.482789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.458 [2024-07-27 02:22:55.482791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.458 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.458 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:23:27.458 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.458 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.458 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 [2024-07-27 02:22:55.638629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 Malloc1 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 [2024-07-27 02:22:55.695964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 Malloc2 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.717 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 Malloc3 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 Malloc4 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 Malloc5 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.718 
02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.718 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.977 Malloc6 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.977 02:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.977 Malloc7 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:27.977 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 Malloc8 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:27.978 02:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 Malloc9 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 Malloc10 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.978 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:28.236 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.236 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:28.236 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.236 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:28.236 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:28.237 Malloc11 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
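The eleven near-identical blocks above (the last iteration, for cnode11, completes just below) are one provisioning loop from target/multiconnection.sh, lines 21-25 in the trace: for each i it creates a 64 MiB malloc bdev with 512-byte blocks, creates subsystem nqn.2016-06.io.spdk:cnode<i> with serial SPDK<i> and any-host access (-a), attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. The xtrace_disable / [[ 0 == 0 ]] pairs bracketing every call are the harness's rpc_cmd wrapper suppressing xtrace around rpc.py and then asserting a zero exit status. As a minimal standalone sketch, assuming rpc.py is invoked from an SPDK checkout and NVMF_SUBSYS=11:

    # Provisioning loop reconstructed from the rpc_cmd calls traced above.
    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"              # 64 MiB bdev, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK$i"                                                  # -a: allow any host, -s: serial
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420                                      # NVMe/TCP listener on port 4420
    done

Each listener add is acknowledged by the target's "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice, as seen after cnode1 above.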
00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:28.237 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:28.802 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:28.802 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:28.802 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:28.802 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:28.802 02:22:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:31.328 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:31.328 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:31.328 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:31.328 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:31.329 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:31.329 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:31.329 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.329 02:22:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:31.586 02:22:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:31.586 02:22:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:31.586 02:22:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:31.586 02:22:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:31.586 02:22:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.483 02:23:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:34.415 02:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:34.415 02:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:34.415 02:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:34.415 02:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:34.415 02:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:36.308 02:23:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:37.239 02:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:37.239 02:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:37.239 02:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:37.239 02:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:37.239 02:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:39.135 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:39.701 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:39.701 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:39.701 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:39.701 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:39.701 02:23:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:42.255 02:23:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:42.512 02:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:42.512 02:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:42.512 02:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:42.512 02:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:42.512 02:23:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.035 02:23:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:45.296 02:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:45.296 02:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:45.296 02:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.296 02:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:45.296 02:23:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.818 02:23:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
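Every connect iteration in this stretch follows the same pattern (the one for cnode8 just above continues below with waitforserial SPDK8): the host attaches over TCP with nvme-cli, then waitforserial from common/autotest_common.sh polls lsblk until a block device carrying the subsystem's serial shows up, sleeping 2 s between attempts and giving up after ~16 tries. A sketch reconstructed from the traced calls, using the host NQN/ID values that appear in the log:

    # Host-side attach; optionally confirm the listener first with:
    #   nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n "nqn.2016-06.io.spdk:cnode$i" \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

    # waitforserial, reconstructed from the trace above (a sketch, not the
    # verbatim helper): poll until lsblk shows exactly one device whose
    # SERIAL column matches the expected serial.
    waitforserial() {
        local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }
    waitforserial "SPDK$i"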
00:23:48.382 02:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:48.382 02:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:48.382 02:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:48.382 02:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:48.382 02:23:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.276 02:23:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:51.209 02:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:51.209 02:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:51.209 02:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:51.209 02:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:51.209 02:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.106 02:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:54.039 02:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:54.039 02:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:54.039 02:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:54.039 02:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:54.039 02:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:55.934 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:55.934 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:55.934 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:23:55.935 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:55.935 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:55.935 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:55.935 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.935 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:56.866 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:56.866 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:56.866 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:56.866 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:56.866 02:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:59.392 02:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:59.392 02:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:59.392 02:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:23:59.392 02:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:59.392 02:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:59.392 02:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:59.392 02:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:59.392 [global] 00:23:59.392 thread=1 00:23:59.392 invalidate=1 00:23:59.392 rw=read 00:23:59.392 time_based=1 00:23:59.392 runtime=10 00:23:59.392 ioengine=libaio 00:23:59.392 direct=1 00:23:59.392 bs=262144 00:23:59.392 iodepth=64 00:23:59.392 norandommap=1 00:23:59.392 numjobs=1 00:23:59.392 00:23:59.392 [job0] 00:23:59.392 filename=/dev/nvme0n1 00:23:59.392 [job1] 00:23:59.392 filename=/dev/nvme10n1 00:23:59.392 [job2] 00:23:59.392 filename=/dev/nvme1n1 00:23:59.392 [job3] 00:23:59.392 filename=/dev/nvme2n1 00:23:59.392 [job4] 00:23:59.392 filename=/dev/nvme3n1 00:23:59.392 [job5] 00:23:59.392 filename=/dev/nvme4n1 00:23:59.392 [job6] 00:23:59.392 filename=/dev/nvme5n1 00:23:59.392 [job7] 00:23:59.392 filename=/dev/nvme6n1 00:23:59.392 [job8] 00:23:59.392 filename=/dev/nvme7n1 00:23:59.392 [job9] 00:23:59.392 filename=/dev/nvme8n1 00:23:59.392 [job10] 00:23:59.392 filename=/dev/nvme9n1 00:23:59.392 Could not set queue depth (nvme0n1) 00:23:59.392 Could not set queue depth (nvme10n1) 00:23:59.392 Could not set queue depth (nvme1n1) 00:23:59.392 Could not set queue depth (nvme2n1) 00:23:59.392 Could not set queue depth (nvme3n1) 00:23:59.392 Could not set queue depth (nvme4n1) 00:23:59.392 Could not set queue depth (nvme5n1) 00:23:59.392 Could not set queue depth (nvme6n1) 00:23:59.392 Could not set queue depth (nvme7n1) 00:23:59.392 Could not set queue depth (nvme8n1) 00:23:59.392 Could not set queue depth (nvme9n1) 00:23:59.392 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.392 fio-3.35 00:23:59.392 Starting 11 threads 00:24:11.633 00:24:11.633 job0: (groupid=0, jobs=1): err= 0: pid=1088502: Sat Jul 27 02:23:37 2024 00:24:11.633 read: IOPS=774, BW=194MiB/s (203MB/s)(1962MiB/10130msec) 00:24:11.633 slat (usec): min=13, max=92548, avg=1135.96, stdev=4300.67 00:24:11.633 clat (msec): min=2, max=304, avg=81.38, stdev=53.83 00:24:11.633 lat (msec): min=2, max=304, avg=82.52, stdev=54.62 00:24:11.633 clat percentiles (msec): 00:24:11.633 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 37], 00:24:11.633 | 30.00th=[ 42], 40.00th=[ 52], 50.00th=[ 68], 60.00th=[ 80], 00:24:11.633 | 70.00th=[ 103], 
80.00th=[ 131], 90.00th=[ 165], 95.00th=[ 188], 00:24:11.633 | 99.00th=[ 226], 99.50th=[ 241], 99.90th=[ 296], 99.95th=[ 300], 00:24:11.633 | 99.99th=[ 305] 00:24:11.633 bw ( KiB/s): min=84480, max=433664, per=11.57%, avg=199273.50, stdev=97377.07, samples=20 00:24:11.633 iops : min= 330, max= 1694, avg=778.35, stdev=380.39, samples=20 00:24:11.633 lat (msec) : 4=0.68%, 10=2.05%, 20=3.21%, 50=33.24%, 100=30.32% 00:24:11.633 lat (msec) : 250=30.11%, 500=0.39% 00:24:11.633 cpu : usr=0.45%, sys=2.50%, ctx=1774, majf=0, minf=4097 00:24:11.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:11.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.634 issued rwts: total=7849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.634 job1: (groupid=0, jobs=1): err= 0: pid=1088503: Sat Jul 27 02:23:37 2024 00:24:11.634 read: IOPS=673, BW=168MiB/s (177MB/s)(1696MiB/10071msec) 00:24:11.634 slat (usec): min=10, max=98825, avg=1186.25, stdev=4249.45 00:24:11.634 clat (msec): min=3, max=230, avg=93.76, stdev=47.69 00:24:11.634 lat (msec): min=3, max=277, avg=94.95, stdev=48.22 00:24:11.634 clat percentiles (msec): 00:24:11.634 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 44], 00:24:11.634 | 30.00th=[ 59], 40.00th=[ 77], 50.00th=[ 94], 60.00th=[ 109], 00:24:11.634 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 159], 95.00th=[ 174], 00:24:11.634 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 222], 99.95th=[ 230], 00:24:11.634 | 99.99th=[ 230] 00:24:11.634 bw ( KiB/s): min=102400, max=386560, per=9.99%, avg=172006.10, stdev=68264.16, samples=20 00:24:11.634 iops : min= 400, max= 1510, avg=671.85, stdev=266.68, samples=20 00:24:11.634 lat (msec) : 4=0.10%, 10=1.56%, 20=2.33%, 50=22.88%, 100=27.36% 00:24:11.634 lat (msec) : 250=45.76% 00:24:11.634 cpu : usr=0.51%, sys=2.29%, ctx=1565, majf=0, minf=4097 00:24:11.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:11.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.634 issued rwts: total=6783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.634 job2: (groupid=0, jobs=1): err= 0: pid=1088504: Sat Jul 27 02:23:37 2024 00:24:11.634 read: IOPS=646, BW=162MiB/s (169MB/s)(1627MiB/10069msec) 00:24:11.634 slat (usec): min=12, max=54472, avg=1429.74, stdev=3881.35 00:24:11.634 clat (msec): min=8, max=207, avg=97.51, stdev=29.09 00:24:11.634 lat (msec): min=8, max=207, avg=98.94, stdev=29.58 00:24:11.634 clat percentiles (msec): 00:24:11.634 | 1.00th=[ 31], 5.00th=[ 49], 10.00th=[ 62], 20.00th=[ 75], 00:24:11.634 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 103], 00:24:11.634 | 70.00th=[ 111], 80.00th=[ 124], 90.00th=[ 136], 95.00th=[ 146], 00:24:11.634 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 199], 99.95th=[ 199], 00:24:11.634 | 99.99th=[ 207] 00:24:11.634 bw ( KiB/s): min=114176, max=237568, per=9.58%, avg=164952.80, stdev=36546.77, samples=20 00:24:11.634 iops : min= 446, max= 928, avg=644.30, stdev=142.82, samples=20 00:24:11.634 lat (msec) : 10=0.05%, 20=0.35%, 50=5.12%, 100=51.21%, 250=43.28% 00:24:11.634 cpu : usr=0.56%, sys=2.27%, ctx=1461, majf=0, minf=4097 00:24:11.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.5%, >=64=99.0% 00:24:11.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.634 issued rwts: total=6507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.634 job3: (groupid=0, jobs=1): err= 0: pid=1088505: Sat Jul 27 02:23:37 2024 00:24:11.634 read: IOPS=579, BW=145MiB/s (152MB/s)(1475MiB/10186msec) 00:24:11.634 slat (usec): min=9, max=141209, avg=1136.75, stdev=5373.49 00:24:11.634 clat (msec): min=2, max=404, avg=109.32, stdev=54.60 00:24:11.634 lat (msec): min=2, max=404, avg=110.46, stdev=55.46 00:24:11.634 clat percentiles (msec): 00:24:11.634 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 35], 20.00th=[ 63], 00:24:11.634 | 30.00th=[ 80], 40.00th=[ 95], 50.00th=[ 109], 60.00th=[ 121], 00:24:11.634 | 70.00th=[ 138], 80.00th=[ 155], 90.00th=[ 186], 95.00th=[ 205], 00:24:11.634 | 99.00th=[ 236], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 359], 00:24:11.634 | 99.99th=[ 405] 00:24:11.634 bw ( KiB/s): min=79872, max=263168, per=8.67%, avg=149323.05, stdev=55327.34, samples=20 00:24:11.634 iops : min= 312, max= 1028, avg=583.25, stdev=216.09, samples=20 00:24:11.634 lat (msec) : 4=0.19%, 10=1.73%, 20=2.92%, 50=10.48%, 100=28.84% 00:24:11.634 lat (msec) : 250=55.39%, 500=0.46% 00:24:11.634 cpu : usr=0.28%, sys=1.74%, ctx=1553, majf=0, minf=4097 00:24:11.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:11.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.634 issued rwts: total=5898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.634 job4: (groupid=0, jobs=1): err= 0: pid=1088506: Sat Jul 27 02:23:37 2024 00:24:11.634 read: IOPS=660, BW=165MiB/s (173MB/s)(1674MiB/10136msec) 00:24:11.634 slat (usec): min=9, max=126837, avg=847.21, stdev=4424.92 00:24:11.634 clat (usec): min=1662, max=344904, avg=95903.42, stdev=56093.27 00:24:11.634 lat (usec): min=1735, max=344944, avg=96750.64, stdev=56652.81 00:24:11.634 clat percentiles (msec): 00:24:11.634 | 1.00th=[ 6], 5.00th=[ 19], 10.00th=[ 35], 20.00th=[ 46], 00:24:11.634 | 30.00th=[ 62], 40.00th=[ 74], 50.00th=[ 89], 60.00th=[ 103], 00:24:11.634 | 70.00th=[ 117], 80.00th=[ 134], 90.00th=[ 184], 95.00th=[ 207], 00:24:11.634 | 99.00th=[ 236], 99.50th=[ 264], 99.90th=[ 334], 99.95th=[ 342], 00:24:11.634 | 99.99th=[ 347] 00:24:11.634 bw ( KiB/s): min=73728, max=386048, per=9.86%, avg=169853.95, stdev=69882.50, samples=20 00:24:11.634 iops : min= 288, max= 1508, avg=663.45, stdev=272.94, samples=20 00:24:11.634 lat (msec) : 2=0.03%, 4=0.60%, 10=1.82%, 20=3.03%, 50=17.29% 00:24:11.634 lat (msec) : 100=35.18%, 250=41.39%, 500=0.66% 00:24:11.634 cpu : usr=0.41%, sys=2.07%, ctx=1886, majf=0, minf=4097 00:24:11.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:11.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.634 issued rwts: total=6697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.634 job5: (groupid=0, jobs=1): err= 0: pid=1088511: Sat Jul 27 02:23:37 2024 00:24:11.634 read: IOPS=706, BW=177MiB/s (185MB/s)(1780MiB/10069msec) 
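Before the remaining per-job detail lines (job5's continue directly below), a quick sanity check on these read results: with bs=262144 (256 KiB) fixed, bandwidth is just IOPS times block size. Taking job0 as the worked example:

    # job0: 774 IOPS at 256 KiB per IO -> ~194 MiB/s, matching "BW=194MiB/s".
    awk 'BEGIN { printf "%.1f MiB/s\n", 774 * 256 / 1024 }'   # prints 193.5 MiB/s

The same identity holds for the aggregate "Run status" line further down: 16.7 GiB moved in roughly 10.2 s of wall time works out to about 1682 MiB/s across the 11 jobs.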
00:24:11.634 slat (usec): min=10, max=106182, avg=784.84, stdev=3665.03 00:24:11.634 clat (usec): min=1231, max=284717, avg=89677.77, stdev=46484.86 00:24:11.634 lat (usec): min=1251, max=284765, avg=90462.61, stdev=46869.08 00:24:11.634 clat percentiles (msec): 00:24:11.634 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 47], 00:24:11.634 | 30.00th=[ 59], 40.00th=[ 75], 50.00th=[ 88], 60.00th=[ 102], 00:24:11.634 | 70.00th=[ 114], 80.00th=[ 129], 90.00th=[ 153], 95.00th=[ 169], 00:24:11.634 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 241], 99.95th=[ 243], 00:24:11.634 | 99.99th=[ 284] 00:24:11.634 bw ( KiB/s): min=101376, max=285184, per=10.49%, avg=180573.15, stdev=51536.73, samples=20 00:24:11.634 iops : min= 396, max= 1114, avg=705.30, stdev=201.35, samples=20 00:24:11.634 lat (msec) : 2=0.21%, 4=0.15%, 10=1.46%, 20=4.16%, 50=16.04% 00:24:11.634 lat (msec) : 100=36.96%, 250=40.99%, 500=0.01% 00:24:11.634 cpu : usr=0.34%, sys=2.17%, ctx=1936, majf=0, minf=3721 00:24:11.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:11.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.634 issued rwts: total=7118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.634 job6: (groupid=0, jobs=1): err= 0: pid=1088512: Sat Jul 27 02:23:37 2024 00:24:11.634 read: IOPS=489, BW=122MiB/s (128MB/s)(1241MiB/10146msec) 00:24:11.634 slat (usec): min=13, max=68882, avg=1967.97, stdev=5342.39 00:24:11.634 clat (msec): min=54, max=342, avg=128.73, stdev=43.48 00:24:11.634 lat (msec): min=54, max=350, avg=130.70, stdev=44.12 00:24:11.634 clat percentiles (msec): 00:24:11.634 | 1.00th=[ 65], 5.00th=[ 72], 10.00th=[ 79], 20.00th=[ 86], 00:24:11.634 | 30.00th=[ 97], 40.00th=[ 110], 50.00th=[ 123], 60.00th=[ 140], 00:24:11.634 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 201], 00:24:11.634 | 99.00th=[ 239], 99.50th=[ 279], 99.90th=[ 338], 99.95th=[ 342], 00:24:11.634 | 99.99th=[ 342] 00:24:11.634 bw ( KiB/s): min=78336, max=197632, per=7.28%, avg=125388.60, stdev=35928.90, samples=20 00:24:11.634 iops : min= 306, max= 772, avg=489.75, stdev=140.32, samples=20 00:24:11.634 lat (msec) : 100=31.39%, 250=68.00%, 500=0.60% 00:24:11.634 cpu : usr=0.29%, sys=1.85%, ctx=1105, majf=0, minf=4097 00:24:11.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:11.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.634 issued rwts: total=4963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.634 job7: (groupid=0, jobs=1): err= 0: pid=1088513: Sat Jul 27 02:23:37 2024 00:24:11.634 read: IOPS=589, BW=147MiB/s (154MB/s)(1484MiB/10074msec) 00:24:11.634 slat (usec): min=9, max=49371, avg=776.33, stdev=3182.01 00:24:11.634 clat (usec): min=1213, max=269709, avg=107763.24, stdev=45865.86 00:24:11.634 lat (usec): min=1234, max=269737, avg=108539.57, stdev=46097.13 00:24:11.634 clat percentiles (msec): 00:24:11.634 | 1.00th=[ 17], 5.00th=[ 44], 10.00th=[ 51], 20.00th=[ 69], 00:24:11.634 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 103], 60.00th=[ 114], 00:24:11.634 | 70.00th=[ 130], 80.00th=[ 146], 90.00th=[ 169], 95.00th=[ 190], 00:24:11.634 | 99.00th=[ 226], 99.50th=[ 249], 99.90th=[ 271], 
99.95th=[ 271], 00:24:11.635 | 99.99th=[ 271] 00:24:11.635 bw ( KiB/s): min=91648, max=235520, per=8.73%, avg=150315.05, stdev=38006.21, samples=20 00:24:11.635 iops : min= 358, max= 920, avg=587.10, stdev=148.47, samples=20 00:24:11.635 lat (msec) : 2=0.07%, 4=0.02%, 10=0.49%, 20=0.67%, 50=8.73% 00:24:11.635 lat (msec) : 100=37.62%, 250=51.95%, 500=0.45% 00:24:11.635 cpu : usr=0.35%, sys=1.88%, ctx=1740, majf=0, minf=4097 00:24:11.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:11.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.635 issued rwts: total=5935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.635 job8: (groupid=0, jobs=1): err= 0: pid=1088516: Sat Jul 27 02:23:37 2024 00:24:11.635 read: IOPS=496, BW=124MiB/s (130MB/s)(1250MiB/10069msec) 00:24:11.635 slat (usec): min=11, max=68077, avg=1909.29, stdev=5226.60 00:24:11.635 clat (msec): min=11, max=250, avg=126.90, stdev=46.51 00:24:11.635 lat (msec): min=11, max=269, avg=128.81, stdev=47.17 00:24:11.635 clat percentiles (msec): 00:24:11.635 | 1.00th=[ 21], 5.00th=[ 57], 10.00th=[ 69], 20.00th=[ 83], 00:24:11.635 | 30.00th=[ 103], 40.00th=[ 113], 50.00th=[ 127], 60.00th=[ 140], 00:24:11.635 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 188], 95.00th=[ 203], 00:24:11.635 | 99.00th=[ 230], 99.50th=[ 234], 99.90th=[ 245], 99.95th=[ 247], 00:24:11.635 | 99.99th=[ 251] 00:24:11.635 bw ( KiB/s): min=81920, max=217088, per=7.34%, avg=126374.50, stdev=38568.80, samples=20 00:24:11.635 iops : min= 320, max= 848, avg=493.65, stdev=150.66, samples=20 00:24:11.635 lat (msec) : 20=0.86%, 50=3.32%, 100=24.56%, 250=71.23%, 500=0.02% 00:24:11.635 cpu : usr=0.32%, sys=1.97%, ctx=1132, majf=0, minf=4097 00:24:11.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:11.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.635 issued rwts: total=4999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.635 job9: (groupid=0, jobs=1): err= 0: pid=1088517: Sat Jul 27 02:23:37 2024 00:24:11.635 read: IOPS=521, BW=130MiB/s (137MB/s)(1323MiB/10151msec) 00:24:11.635 slat (usec): min=9, max=120096, avg=1624.57, stdev=5043.99 00:24:11.635 clat (msec): min=23, max=320, avg=121.03, stdev=38.47 00:24:11.635 lat (msec): min=23, max=440, avg=122.65, stdev=39.03 00:24:11.635 clat percentiles (msec): 00:24:11.635 | 1.00th=[ 44], 5.00th=[ 71], 10.00th=[ 84], 20.00th=[ 94], 00:24:11.635 | 30.00th=[ 102], 40.00th=[ 108], 50.00th=[ 114], 60.00th=[ 122], 00:24:11.635 | 70.00th=[ 133], 80.00th=[ 144], 90.00th=[ 163], 95.00th=[ 199], 00:24:11.635 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 309], 99.95th=[ 321], 00:24:11.635 | 99.99th=[ 321] 00:24:11.635 bw ( KiB/s): min=71680, max=190464, per=7.77%, avg=133812.60, stdev=26912.79, samples=20 00:24:11.635 iops : min= 280, max= 744, avg=522.65, stdev=105.14, samples=20 00:24:11.635 lat (msec) : 50=1.28%, 100=26.42%, 250=71.33%, 500=0.96% 00:24:11.635 cpu : usr=0.38%, sys=1.81%, ctx=1282, majf=0, minf=4097 00:24:11.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:11.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.635 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.635 issued rwts: total=5292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.635 job10: (groupid=0, jobs=1): err= 0: pid=1088518: Sat Jul 27 02:23:37 2024 00:24:11.635 read: IOPS=642, BW=161MiB/s (168MB/s)(1618MiB/10071msec) 00:24:11.635 slat (usec): min=9, max=122075, avg=976.34, stdev=4259.46 00:24:11.635 clat (usec): min=1150, max=292373, avg=98517.52, stdev=51455.00 00:24:11.635 lat (usec): min=1176, max=292401, avg=99493.87, stdev=51976.09 00:24:11.635 clat percentiles (msec): 00:24:11.635 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 33], 20.00th=[ 56], 00:24:11.635 | 30.00th=[ 73], 40.00th=[ 87], 50.00th=[ 97], 60.00th=[ 107], 00:24:11.635 | 70.00th=[ 117], 80.00th=[ 132], 90.00th=[ 167], 95.00th=[ 199], 00:24:11.635 | 99.00th=[ 239], 99.50th=[ 266], 99.90th=[ 288], 99.95th=[ 288], 00:24:11.635 | 99.99th=[ 292] 00:24:11.635 bw ( KiB/s): min=79872, max=242176, per=9.53%, avg=164065.10, stdev=47097.99, samples=20 00:24:11.635 iops : min= 312, max= 946, avg=640.80, stdev=184.00, samples=20 00:24:11.635 lat (msec) : 2=0.22%, 4=0.37%, 10=2.13%, 20=2.81%, 50=11.83% 00:24:11.635 lat (msec) : 100=36.15%, 250=45.88%, 500=0.60% 00:24:11.635 cpu : usr=0.23%, sys=2.21%, ctx=1722, majf=0, minf=4097 00:24:11.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:11.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:11.635 issued rwts: total=6473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:11.635 00:24:11.635 Run status group 0 (all jobs): 00:24:11.635 READ: bw=1682MiB/s (1763MB/s), 122MiB/s-194MiB/s (128MB/s-203MB/s), io=16.7GiB (18.0GB), run=10069-10186msec 00:24:11.635 00:24:11.635 Disk stats (read/write): 00:24:11.635 nvme0n1: ios=15537/0, merge=0/0, ticks=1229391/0, in_queue=1229391, util=96.95% 00:24:11.635 nvme10n1: ios=13334/0, merge=0/0, ticks=1233238/0, in_queue=1233238, util=97.19% 00:24:11.635 nvme1n1: ios=12650/0, merge=0/0, ticks=1232219/0, in_queue=1232219, util=97.46% 00:24:11.635 nvme2n1: ios=11795/0, merge=0/0, ticks=1270920/0, in_queue=1270920, util=97.71% 00:24:11.635 nvme3n1: ios=13216/0, merge=0/0, ticks=1229286/0, in_queue=1229286, util=97.73% 00:24:11.635 nvme4n1: ios=13929/0, merge=0/0, ticks=1242614/0, in_queue=1242614, util=98.15% 00:24:11.635 nvme5n1: ios=9729/0, merge=0/0, ticks=1216410/0, in_queue=1216410, util=98.33% 00:24:11.635 nvme6n1: ios=11626/0, merge=0/0, ticks=1243953/0, in_queue=1243953, util=98.46% 00:24:11.635 nvme7n1: ios=9703/0, merge=0/0, ticks=1229528/0, in_queue=1229528, util=98.89% 00:24:11.635 nvme8n1: ios=10436/0, merge=0/0, ticks=1227159/0, in_queue=1227159, util=99.10% 00:24:11.635 nvme9n1: ios=12675/0, merge=0/0, ticks=1236674/0, in_queue=1236674, util=99.22% 00:24:11.635 02:23:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:11.635 [global] 00:24:11.635 thread=1 00:24:11.635 invalidate=1 00:24:11.635 rw=randwrite 00:24:11.635 time_based=1 00:24:11.635 runtime=10 00:24:11.635 ioengine=libaio 00:24:11.635 direct=1 00:24:11.635 bs=262144 00:24:11.635 iodepth=64 00:24:11.635 norandommap=1 00:24:11.635 numjobs=1 00:24:11.635 00:24:11.635 [job0] 
00:24:11.635 filename=/dev/nvme0n1 00:24:11.635 [job1] 00:24:11.635 filename=/dev/nvme10n1 00:24:11.635 [job2] 00:24:11.635 filename=/dev/nvme1n1 00:24:11.635 [job3] 00:24:11.635 filename=/dev/nvme2n1 00:24:11.635 [job4] 00:24:11.635 filename=/dev/nvme3n1 00:24:11.635 [job5] 00:24:11.635 filename=/dev/nvme4n1 00:24:11.635 [job6] 00:24:11.635 filename=/dev/nvme5n1 00:24:11.635 [job7] 00:24:11.635 filename=/dev/nvme6n1 00:24:11.635 [job8] 00:24:11.635 filename=/dev/nvme7n1 00:24:11.635 [job9] 00:24:11.635 filename=/dev/nvme8n1 00:24:11.635 [job10] 00:24:11.635 filename=/dev/nvme9n1 00:24:11.635 Could not set queue depth (nvme0n1) 00:24:11.635 Could not set queue depth (nvme10n1) 00:24:11.635 Could not set queue depth (nvme1n1) 00:24:11.635 Could not set queue depth (nvme2n1) 00:24:11.635 Could not set queue depth (nvme3n1) 00:24:11.635 Could not set queue depth (nvme4n1) 00:24:11.635 Could not set queue depth (nvme5n1) 00:24:11.635 Could not set queue depth (nvme6n1) 00:24:11.635 Could not set queue depth (nvme7n1) 00:24:11.635 Could not set queue depth (nvme8n1) 00:24:11.635 Could not set queue depth (nvme9n1) 00:24:11.635 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:11.635 fio-3.35 00:24:11.635 Starting 11 threads 00:24:21.611 00:24:21.611 job0: (groupid=0, jobs=1): err= 0: pid=1089861: Sat Jul 27 02:23:48 2024 00:24:21.611 write: IOPS=364, BW=91.0MiB/s (95.4MB/s)(925MiB/10159msec); 0 zone resets 00:24:21.611 slat (usec): min=21, max=680935, avg=2206.76, stdev=13922.25 00:24:21.611 clat (msec): min=2, max=1583, avg=173.43, stdev=192.66 00:24:21.611 lat (msec): min=6, max=1601, avg=175.64, stdev=194.70 00:24:21.611 clat percentiles (msec): 00:24:21.611 | 1.00th=[ 20], 5.00th=[ 44], 10.00th=[ 63], 20.00th=[ 80], 00:24:21.611 | 30.00th=[ 87], 40.00th=[ 115], 50.00th=[ 144], 60.00th=[ 171], 00:24:21.611 | 70.00th=[ 201], 80.00th=[ 224], 90.00th=[ 266], 95.00th=[ 309], 00:24:21.611 | 99.00th=[ 1536], 99.50th=[ 1536], 99.90th=[ 1586], 99.95th=[ 1586], 00:24:21.611 | 99.99th=[ 1586] 00:24:21.611 bw ( KiB/s): min= 2048, max=195072, per=8.58%, avg=97989.68, stdev=52081.42, samples=19 00:24:21.611 iops : min= 8, max= 762, avg=382.74, stdev=203.45, samples=19 
00:24:21.611 lat (msec) : 4=0.03%, 10=0.27%, 20=0.78%, 50=6.11%, 100=30.33% 00:24:21.611 lat (msec) : 250=49.18%, 500=11.38%, 750=0.11%, 1000=0.11%, 2000=1.70% 00:24:21.611 cpu : usr=1.00%, sys=1.06%, ctx=1800, majf=0, minf=1 00:24:21.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:24:21.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.611 issued rwts: total=0,3699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.611 job1: (groupid=0, jobs=1): err= 0: pid=1089873: Sat Jul 27 02:23:48 2024 00:24:21.611 write: IOPS=397, BW=99.5MiB/s (104MB/s)(1011MiB/10163msec); 0 zone resets 00:24:21.611 slat (usec): min=23, max=1221.1k, avg=1678.89, stdev=20092.83 00:24:21.611 clat (msec): min=3, max=1573, avg=159.04, stdev=201.17 00:24:21.611 lat (msec): min=4, max=1575, avg=160.71, stdev=202.95 00:24:21.611 clat percentiles (msec): 00:24:21.611 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 50], 20.00th=[ 70], 00:24:21.611 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 111], 00:24:21.611 | 70.00th=[ 159], 80.00th=[ 203], 90.00th=[ 309], 95.00th=[ 418], 00:24:21.611 | 99.00th=[ 1469], 99.50th=[ 1502], 99.90th=[ 1552], 99.95th=[ 1569], 00:24:21.611 | 99.99th=[ 1569] 00:24:21.611 bw ( KiB/s): min=16384, max=207360, per=9.91%, avg=113252.78, stdev=61708.83, samples=18 00:24:21.611 iops : min= 64, max= 810, avg=442.39, stdev=241.05, samples=18 00:24:21.611 lat (msec) : 4=0.02%, 10=0.20%, 20=1.48%, 50=8.73%, 100=44.36% 00:24:21.611 lat (msec) : 250=31.26%, 500=10.53%, 750=1.85%, 2000=1.56% 00:24:21.611 cpu : usr=1.23%, sys=1.17%, ctx=2443, majf=0, minf=1 00:24:21.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:21.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.611 issued rwts: total=0,4044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.611 job2: (groupid=0, jobs=1): err= 0: pid=1089874: Sat Jul 27 02:23:48 2024 00:24:21.611 write: IOPS=502, BW=126MiB/s (132MB/s)(1286MiB/10241msec); 0 zone resets 00:24:21.611 slat (usec): min=17, max=329224, avg=1171.82, stdev=8945.82 00:24:21.611 clat (usec): min=1650, max=1285.8k, avg=125862.31, stdev=163766.85 00:24:21.611 lat (usec): min=1723, max=1285.8k, avg=127034.13, stdev=165372.62 00:24:21.611 clat percentiles (msec): 00:24:21.611 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 28], 20.00th=[ 46], 00:24:21.611 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 68], 60.00th=[ 102], 00:24:21.611 | 70.00th=[ 142], 80.00th=[ 188], 90.00th=[ 236], 95.00th=[ 292], 00:24:21.611 | 99.00th=[ 944], 99.50th=[ 1200], 99.90th=[ 1267], 99.95th=[ 1284], 00:24:21.611 | 99.99th=[ 1284] 00:24:21.611 bw ( KiB/s): min=14336, max=367616, per=11.38%, avg=130032.15, stdev=96516.46, samples=20 00:24:21.611 iops : min= 56, max= 1436, avg=507.90, stdev=377.03, samples=20 00:24:21.611 lat (msec) : 2=0.14%, 4=0.56%, 10=2.27%, 20=4.30%, 50=31.11% 00:24:21.611 lat (msec) : 100=21.35%, 250=31.58%, 500=5.83%, 750=0.62%, 1000=1.32% 00:24:21.611 lat (msec) : 2000=0.91% 00:24:21.611 cpu : usr=1.63%, sys=1.52%, ctx=2963, majf=0, minf=1 00:24:21.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:21.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.611 issued rwts: total=0,5143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.611 job3: (groupid=0, jobs=1): err= 0: pid=1089875: Sat Jul 27 02:23:48 2024 00:24:21.611 write: IOPS=291, BW=72.9MiB/s (76.5MB/s)(747MiB/10242msec); 0 zone resets 00:24:21.611 slat (usec): min=25, max=1055.7k, avg=2764.68, stdev=22381.30 00:24:21.611 clat (msec): min=2, max=1583, avg=216.45, stdev=279.86 00:24:21.611 lat (msec): min=2, max=1583, avg=219.21, stdev=282.94 00:24:21.611 clat percentiles (msec): 00:24:21.611 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 69], 00:24:21.611 | 30.00th=[ 88], 40.00th=[ 108], 50.00th=[ 132], 60.00th=[ 186], 00:24:21.611 | 70.00th=[ 213], 80.00th=[ 262], 90.00th=[ 347], 95.00th=[ 944], 00:24:21.611 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1586], 99.95th=[ 1586], 00:24:21.611 | 99.99th=[ 1586] 00:24:21.611 bw ( KiB/s): min=14336, max=240640, per=7.28%, avg=83183.33, stdev=67050.44, samples=18 00:24:21.611 iops : min= 56, max= 940, avg=324.89, stdev=261.90, samples=18 00:24:21.611 lat (msec) : 4=0.17%, 10=1.61%, 20=3.05%, 50=8.37%, 100=21.89% 00:24:21.611 lat (msec) : 250=43.01%, 500=15.50%, 750=0.33%, 1000=1.51%, 2000=4.59% 00:24:21.611 cpu : usr=0.98%, sys=0.96%, ctx=1551, majf=0, minf=1 00:24:21.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:24:21.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.611 issued rwts: total=0,2988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.611 job4: (groupid=0, jobs=1): err= 0: pid=1089876: Sat Jul 27 02:23:48 2024 00:24:21.612 write: IOPS=360, BW=90.1MiB/s (94.5MB/s)(909MiB/10081msec); 0 zone resets 00:24:21.612 slat (usec): min=26, max=758101, avg=2292.64, stdev=15416.28 00:24:21.612 clat (msec): min=4, max=1562, avg=175.09, stdev=203.49 00:24:21.612 lat (msec): min=4, max=1562, avg=177.38, stdev=205.67 00:24:21.612 clat percentiles (msec): 00:24:21.612 | 1.00th=[ 27], 5.00th=[ 66], 10.00th=[ 83], 20.00th=[ 89], 00:24:21.612 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 106], 60.00th=[ 126], 00:24:21.612 | 70.00th=[ 184], 80.00th=[ 230], 90.00th=[ 300], 95.00th=[ 338], 00:24:21.612 | 99.00th=[ 1536], 99.50th=[ 1552], 99.90th=[ 1552], 99.95th=[ 1569], 00:24:21.612 | 99.99th=[ 1569] 00:24:21.612 bw ( KiB/s): min= 2048, max=176128, per=8.01%, avg=91458.30, stdev=58995.58, samples=20 00:24:21.612 iops : min= 8, max= 688, avg=357.25, stdev=230.44, samples=20 00:24:21.612 lat (msec) : 10=0.17%, 20=0.74%, 50=0.74%, 100=42.59%, 250=39.81% 00:24:21.612 lat (msec) : 500=12.63%, 750=1.60%, 2000=1.73% 00:24:21.612 cpu : usr=1.15%, sys=1.26%, ctx=1624, majf=0, minf=1 00:24:21.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:24:21.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.612 issued rwts: total=0,3635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.612 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.612 job5: (groupid=0, jobs=1): err= 0: pid=1089877: Sat Jul 27 02:23:48 2024 00:24:21.612 write: IOPS=429, BW=107MiB/s (113MB/s)(1099MiB/10238msec); 0 zone resets 
00:24:21.612 slat (usec): min=15, max=168009, avg=1438.79, stdev=6463.99 00:24:21.612 clat (usec): min=1996, max=1227.5k, avg=147615.39, stdev=151685.01 00:24:21.612 lat (msec): min=2, max=1227, avg=149.05, stdev=152.51 00:24:21.612 clat percentiles (msec): 00:24:21.612 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 65], 00:24:21.612 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 102], 60.00th=[ 122], 00:24:21.612 | 70.00th=[ 167], 80.00th=[ 197], 90.00th=[ 271], 95.00th=[ 435], 00:24:21.612 | 99.00th=[ 793], 99.50th=[ 844], 99.90th=[ 927], 99.95th=[ 1217], 00:24:21.612 | 99.99th=[ 1234] 00:24:21.612 bw ( KiB/s): min=10240, max=225792, per=9.70%, avg=110848.00, stdev=67467.84, samples=20 00:24:21.612 iops : min= 40, max= 882, avg=433.00, stdev=263.55, samples=20 00:24:21.612 lat (msec) : 2=0.02%, 4=1.05%, 10=3.05%, 20=3.41%, 50=9.19% 00:24:21.612 lat (msec) : 100=32.13%, 250=39.49%, 500=7.01%, 750=3.12%, 1000=1.43% 00:24:21.612 lat (msec) : 2000=0.09% 00:24:21.612 cpu : usr=1.21%, sys=1.27%, ctx=2488, majf=0, minf=1 00:24:21.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:21.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.612 issued rwts: total=0,4394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.612 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.612 job6: (groupid=0, jobs=1): err= 0: pid=1089878: Sat Jul 27 02:23:48 2024 00:24:21.612 write: IOPS=552, BW=138MiB/s (145MB/s)(1394MiB/10083msec); 0 zone resets 00:24:21.612 slat (usec): min=16, max=69743, avg=1173.49, stdev=2873.73 00:24:21.612 clat (usec): min=1889, max=1412.2k, avg=114558.64, stdev=131757.45 00:24:21.612 lat (usec): min=1941, max=1412.3k, avg=115732.14, stdev=131805.04 00:24:21.612 clat percentiles (msec): 00:24:21.612 | 1.00th=[ 4], 5.00th=[ 25], 10.00th=[ 53], 20.00th=[ 72], 00:24:21.612 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 100], 00:24:21.612 | 70.00th=[ 115], 80.00th=[ 138], 90.00th=[ 165], 95.00th=[ 186], 00:24:21.612 | 99.00th=[ 1045], 99.50th=[ 1217], 99.90th=[ 1401], 99.95th=[ 1401], 00:24:21.612 | 99.99th=[ 1418] 00:24:21.612 bw ( KiB/s): min=25088, max=215552, per=12.35%, avg=141070.55, stdev=53501.91, samples=20 00:24:21.612 iops : min= 98, max= 842, avg=551.05, stdev=209.00, samples=20 00:24:21.612 lat (msec) : 2=0.04%, 4=1.06%, 10=1.36%, 20=1.69%, 50=4.11% 00:24:21.612 lat (msec) : 100=52.87%, 250=36.42%, 500=0.97%, 750=0.45%, 2000=1.04% 00:24:21.612 cpu : usr=1.73%, sys=1.60%, ctx=2572, majf=0, minf=1 00:24:21.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:21.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.612 issued rwts: total=0,5574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.612 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.612 job7: (groupid=0, jobs=1): err= 0: pid=1089879: Sat Jul 27 02:23:48 2024 00:24:21.612 write: IOPS=484, BW=121MiB/s (127MB/s)(1231MiB/10165msec); 0 zone resets 00:24:21.612 slat (usec): min=16, max=208466, avg=1249.40, stdev=5164.93 00:24:21.612 clat (usec): min=1730, max=1912.3k, avg=130859.29, stdev=191918.00 00:24:21.612 lat (usec): min=1761, max=1914.3k, avg=132108.69, stdev=192510.17 00:24:21.612 clat percentiles (msec): 00:24:21.612 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 15], 20.00th=[ 16], 00:24:21.612 | 
30.00th=[ 19], 40.00th=[ 78], 50.00th=[ 90], 60.00th=[ 134], 00:24:21.612 | 70.00th=[ 169], 80.00th=[ 190], 90.00th=[ 253], 95.00th=[ 300], 00:24:21.612 | 99.00th=[ 978], 99.50th=[ 1804], 99.90th=[ 1888], 99.95th=[ 1905], 00:24:21.612 | 99.99th=[ 1905] 00:24:21.612 bw ( KiB/s): min=65536, max=269312, per=10.89%, avg=124390.40, stdev=56365.91, samples=20 00:24:21.612 iops : min= 256, max= 1052, avg=485.90, stdev=220.18, samples=20 00:24:21.612 lat (msec) : 2=0.06%, 4=0.06%, 10=3.86%, 20=26.59%, 50=2.89% 00:24:21.612 lat (msec) : 100=20.58%, 250=35.51%, 500=8.92%, 750=0.51%, 1000=0.02% 00:24:21.612 lat (msec) : 2000=1.00% 00:24:21.612 cpu : usr=1.39%, sys=1.96%, ctx=3205, majf=0, minf=1 00:24:21.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:21.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.612 issued rwts: total=0,4922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.612 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.612 job8: (groupid=0, jobs=1): err= 0: pid=1089880: Sat Jul 27 02:23:48 2024 00:24:21.612 write: IOPS=407, BW=102MiB/s (107MB/s)(1030MiB/10099msec); 0 zone resets 00:24:21.612 slat (usec): min=22, max=649868, avg=1541.21, stdev=12232.38 00:24:21.612 clat (msec): min=2, max=1646, avg=155.16, stdev=199.16 00:24:21.612 lat (msec): min=2, max=1650, avg=156.71, stdev=200.03 00:24:21.612 clat percentiles (msec): 00:24:21.612 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 27], 20.00th=[ 50], 00:24:21.612 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 109], 60.00th=[ 146], 00:24:21.612 | 70.00th=[ 184], 80.00th=[ 211], 90.00th=[ 255], 95.00th=[ 351], 00:24:21.612 | 99.00th=[ 1418], 99.50th=[ 1586], 99.90th=[ 1636], 99.95th=[ 1636], 00:24:21.612 | 99.99th=[ 1653] 00:24:21.612 bw ( KiB/s): min=10240, max=178176, per=9.09%, avg=103808.00, stdev=48325.08, samples=20 00:24:21.612 iops : min= 40, max= 696, avg=405.50, stdev=188.77, samples=20 00:24:21.612 lat (msec) : 4=0.05%, 10=4.06%, 20=4.25%, 50=11.78%, 100=27.56% 00:24:21.612 lat (msec) : 250=41.60%, 500=7.53%, 750=1.65%, 2000=1.53% 00:24:21.612 cpu : usr=1.26%, sys=1.27%, ctx=2641, majf=0, minf=1 00:24:21.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:21.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.612 issued rwts: total=0,4118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.612 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.612 job9: (groupid=0, jobs=1): err= 0: pid=1089881: Sat Jul 27 02:23:48 2024 00:24:21.612 write: IOPS=488, BW=122MiB/s (128MB/s)(1246MiB/10195msec); 0 zone resets 00:24:21.612 slat (usec): min=14, max=701975, avg=1154.49, stdev=11926.63 00:24:21.612 clat (msec): min=2, max=1600, avg=129.76, stdev=214.03 00:24:21.612 lat (msec): min=2, max=1611, avg=130.92, stdev=214.93 00:24:21.612 clat percentiles (msec): 00:24:21.612 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 26], 00:24:21.612 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 67], 60.00th=[ 91], 00:24:21.612 | 70.00th=[ 127], 80.00th=[ 171], 90.00th=[ 230], 95.00th=[ 317], 00:24:21.612 | 99.00th=[ 1418], 99.50th=[ 1569], 99.90th=[ 1603], 99.95th=[ 1603], 00:24:21.612 | 99.99th=[ 1603] 00:24:21.612 bw ( KiB/s): min= 512, max=281088, per=11.02%, avg=125912.90, stdev=72213.37, samples=20 00:24:21.612 iops : min= 2, max= 1098, 
avg=491.80, stdev=282.09, samples=20 00:24:21.612 lat (msec) : 4=0.08%, 10=1.45%, 20=16.00%, 50=16.02%, 100=28.32% 00:24:21.612 lat (msec) : 250=30.15%, 500=4.01%, 750=0.18%, 1000=1.81%, 2000=1.99% 00:24:21.612 cpu : usr=1.21%, sys=1.68%, ctx=3106, majf=0, minf=1 00:24:21.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:21.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.612 issued rwts: total=0,4982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.612 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.612 job10: (groupid=0, jobs=1): err= 0: pid=1089882: Sat Jul 27 02:23:48 2024 00:24:21.612 write: IOPS=216, BW=54.1MiB/s (56.7MB/s)(550MiB/10175msec); 0 zone resets 00:24:21.612 slat (usec): min=24, max=861414, avg=4540.87, stdev=21583.83 00:24:21.612 clat (msec): min=13, max=1573, avg=290.46, stdev=234.43 00:24:21.612 lat (msec): min=13, max=1573, avg=295.00, stdev=236.50 00:24:21.612 clat percentiles (msec): 00:24:21.612 | 1.00th=[ 46], 5.00th=[ 130], 10.00th=[ 148], 20.00th=[ 186], 00:24:21.612 | 30.00th=[ 203], 40.00th=[ 218], 50.00th=[ 239], 60.00th=[ 266], 00:24:21.612 | 70.00th=[ 300], 80.00th=[ 326], 90.00th=[ 384], 95.00th=[ 575], 00:24:21.612 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1569], 00:24:21.612 | 99.99th=[ 1569] 00:24:21.612 bw ( KiB/s): min= 2048, max=101376, per=5.04%, avg=57586.53, stdev=25610.45, samples=19 00:24:21.613 iops : min= 8, max= 396, avg=224.95, stdev=100.04, samples=19 00:24:21.613 lat (msec) : 20=0.36%, 50=1.55%, 100=1.27%, 250=49.59%, 500=41.50% 00:24:21.613 lat (msec) : 750=2.86%, 2000=2.86% 00:24:21.613 cpu : usr=0.68%, sys=0.66%, ctx=592, majf=0, minf=1 00:24:21.613 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:24:21.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:21.613 issued rwts: total=0,2200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:21.613 00:24:21.613 Run status group 0 (all jobs): 00:24:21.613 WRITE: bw=1115MiB/s (1170MB/s), 54.1MiB/s-138MiB/s (56.7MB/s-145MB/s), io=11.2GiB (12.0GB), run=10081-10242msec 00:24:21.613 00:24:21.613 Disk stats (read/write): 00:24:21.613 nvme0n1: ios=49/7361, merge=0/0, ticks=1300/1233710, in_queue=1235010, util=99.89% 00:24:21.613 nvme10n1: ios=38/8050, merge=0/0, ticks=839/1245307, in_queue=1246146, util=100.00% 00:24:21.613 nvme1n1: ios=34/10172, merge=0/0, ticks=1848/1110543, in_queue=1112391, util=100.00% 00:24:21.613 nvme2n1: ios=42/5855, merge=0/0, ticks=1453/1145046, in_queue=1146499, util=100.00% 00:24:21.613 nvme3n1: ios=43/6981, merge=0/0, ticks=1389/1212883, in_queue=1214272, util=100.00% 00:24:21.613 nvme4n1: ios=0/8671, merge=0/0, ticks=0/1186735, in_queue=1186735, util=98.04% 00:24:21.613 nvme5n1: ios=0/10857, merge=0/0, ticks=0/1219376, in_queue=1219376, util=98.17% 00:24:21.613 nvme6n1: ios=0/9815, merge=0/0, ticks=0/1246075, in_queue=1246075, util=98.34% 00:24:21.613 nvme7n1: ios=16/8037, merge=0/0, ticks=928/1213955, in_queue=1214883, util=99.97% 00:24:21.613 nvme8n1: ios=0/9904, merge=0/0, ticks=0/1189303, in_queue=1189303, util=98.97% 00:24:21.613 nvme9n1: ios=46/4357, merge=0/0, ticks=1637/1222734, in_queue=1224371, util=100.00% 00:24:21.613 02:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:21.613 02:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:21.613 02:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.613 02:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:21.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:21.613 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.613 02:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:21.613 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.613 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:21.873 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.873 02:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.873 02:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:22.132 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.132 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:22.392 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.392 02:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.392 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:22.652 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:22.652 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:22.652 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.652 02:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:22.910 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.910 02:23:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:23.168 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.168 
02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:23.168 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:23.168 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.169 rmmod nvme_tcp 00:24:23.169 rmmod nvme_fabrics 00:24:23.169 rmmod nvme_keyring 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:23.169 02:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1084241 ']' 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1084241 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1084241 ']' 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1084241 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1084241 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1084241' 00:24:23.169 killing process with pid 1084241 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1084241 00:24:23.169 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1084241 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.736 02:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.272 00:24:26.272 real 1m0.610s 00:24:26.272 user 3m22.170s 00:24:26.272 sys 0m22.532s 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:26.272 ************************************ 00:24:26.272 END TEST nvmf_multiconnection 00:24:26.272 ************************************ 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:26.272 ************************************ 00:24:26.272 START TEST nvmf_initiator_timeout 00:24:26.272 ************************************ 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:26.272 * Looking for test storage... 00:24:26.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.272 
02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.272 02:23:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.179 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:28.180 02:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.180 02:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.180 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.180 02:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.180 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:24:28.180 00:24:28.180 --- 10.0.0.2 ping statistics --- 00:24:28.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.180 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:24:28.180 00:24:28.180 --- 10.0.0.1 ping statistics --- 00:24:28.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.180 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:28.180 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.181 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.181 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.181 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.181 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.181 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.181 02:23:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1093045 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1093045 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1093045 ']' 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.181 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.181 [2024-07-27 02:23:56.055263] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:24:28.181 [2024-07-27 02:23:56.055354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.181 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.181 [2024-07-27 02:23:56.093647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:28.181 [2024-07-27 02:23:56.124625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.181 [2024-07-27 02:23:56.216990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.181 [2024-07-27 02:23:56.217070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.181 [2024-07-27 02:23:56.217088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.181 [2024-07-27 02:23:56.217102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.181 [2024-07-27 02:23:56.217114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
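The nvmf_tcp_init and nvmfappstart steps above set up the standard phy-mode loopback topology: the two E810 ports on this rig can reach each other on the wire (the cross-namespace pings depend on it), so one port (cvl_0_0) is moved into a private network namespace to act as the target while cvl_0_1 stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace. A condensed, hedged replay of those commands follows; the until-loop is an illustrative stand-in for the harness's waitforlisten helper and assumes the default /var/tmp/spdk.sock RPC socket:

    # Split-namespace loopback: target port in its own netns, initiator port in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic, then start the target inside the namespace.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Illustrative stand-in for waitforlisten: block until the RPC socket shows up.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The two pings that follow the iptables rule in the log are the sanity check that this topology actually carries traffic in both directions before any NVMe/TCP work starts.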
00:24:28.181 [2024-07-27 02:23:56.217193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.181 [2024-07-27 02:23:56.217248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.181 [2024-07-27 02:23:56.217433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.181 [2024-07-27 02:23:56.217436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.440 Malloc0 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.440 Delay0 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.440 [2024-07-27 02:23:56.407377] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.440 02:23:56 
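The rpc_cmd calls in this stretch assemble the entire data path: a 64 MiB malloc bdev with 512-byte blocks, a delay bdev (Delay0) stacked on it with 30 us average and p99 latencies for both reads and writes, a TCP transport, and subsystem nqn.2016-06.io.spdk:cnode1, which receives Delay0 as a namespace plus a listener on 10.0.0.2:4420. The same sequence as a hedged, direct rpc.py sketch (rpc_cmd is the harness wrapper around calls like these; the script path is illustrative):

    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # -r/-w set average read/write latency, -t/-n the matching p99s, in microseconds.
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the initiator side attaches with plain nvme-cli, which is what the nvme connect at step 29 below does.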
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.440 [2024-07-27 02:23:56.435701] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.440 02:23:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:29.007 02:23:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:29.007 02:23:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:24:29.007 02:23:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.007 02:23:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:29.007 02:23:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:24:30.914 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1093373 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:24:30.915 02:23:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:30.915 [global] 00:24:30.915 thread=1 00:24:30.915 invalidate=1 00:24:30.915 rw=write 00:24:30.915 time_based=1 00:24:30.915 runtime=60 00:24:30.915 ioengine=libaio 00:24:30.915 direct=1 00:24:30.915 bs=4096 00:24:30.915 iodepth=1 00:24:30.915 norandommap=0 00:24:30.915 numjobs=1 00:24:30.915 00:24:30.915 verify_dump=1 00:24:30.915 verify_backlog=512 00:24:30.915 verify_state_save=0 00:24:30.915 do_verify=1 00:24:30.915 verify=crc32c-intel 00:24:30.915 [job0] 00:24:30.915 filename=/dev/nvme0n1 00:24:31.175 Could not set queue depth (nvme0n1) 00:24:31.175 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:31.175 fio-3.35 00:24:31.175 Starting 1 thread 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.495 true 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.495 true 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.495 true 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.495 true 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.495 02:24:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:24:37.025 true 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:37.025 true 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:37.025 true 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:37.025 true 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:37.025 02:24:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1093373 00:25:33.255 00:25:33.255 job0: (groupid=0, jobs=1): err= 0: pid=1093543: Sat Jul 27 02:24:59 2024 00:25:33.255 read: IOPS=105, BW=422KiB/s (433kB/s)(24.8MiB/60001msec) 00:25:33.255 slat (nsec): min=5361, max=76204, avg=12664.12, stdev=6685.71 00:25:33.255 clat (usec): min=335, max=41145k, avg=9113.07, stdev=516911.02 00:25:33.255 lat (usec): min=343, max=41145k, avg=9125.73, stdev=516911.07 00:25:33.255 clat percentiles (usec): 00:25:33.255 | 1.00th=[ 347], 5.00th=[ 351], 10.00th=[ 359], 00:25:33.255 | 20.00th=[ 363], 30.00th=[ 371], 40.00th=[ 375], 00:25:33.255 | 50.00th=[ 383], 60.00th=[ 392], 70.00th=[ 400], 00:25:33.255 | 80.00th=[ 412], 90.00th=[ 545], 95.00th=[ 41157], 00:25:33.255 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41157], 00:25:33.255 | 99.95th=[ 41681], 99.99th=[17112761] 00:25:33.255 write: IOPS=110, BW=444KiB/s (454kB/s)(26.0MiB/60001msec); 0 zone resets 00:25:33.255 slat (nsec): min=6450, max=75853, avg=17114.95, stdev=8302.50 00:25:33.255 clat (usec): min=227, max=1282, avg=300.60, stdev=41.76 00:25:33.255 lat (usec): min=235, max=1299, avg=317.71, stdev=46.80 00:25:33.255 clat percentiles (usec): 00:25:33.255 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 265], 00:25:33.255 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:25:33.255 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 375], 00:25:33.255 | 99.00th=[ 429], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 486], 00:25:33.255 | 99.99th=[ 
1287] 00:25:33.255 bw ( KiB/s): min= 4096, max= 7520, per=100.00%, avg=5345.78, stdev=1446.00, samples=9 00:25:33.255 iops : min= 1024, max= 1880, avg=1336.44, stdev=361.50, samples=9 00:25:33.255 lat (usec) : 250=3.60%, 500=89.15%, 750=4.49%, 1000=0.04% 00:25:33.255 lat (msec) : 2=0.04%, 50=2.67%, >=2000=0.01% 00:25:33.255 cpu : usr=0.24%, sys=0.45%, ctx=12995, majf=0, minf=2 00:25:33.255 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:33.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.255 issued rwts: total=6337,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:33.255 00:25:33.255 Run status group 0 (all jobs): 00:25:33.255 READ: bw=422KiB/s (433kB/s), 422KiB/s-422KiB/s (433kB/s-433kB/s), io=24.8MiB (26.0MB), run=60001-60001msec 00:25:33.255 WRITE: bw=444KiB/s (454kB/s), 444KiB/s-444KiB/s (454kB/s-454kB/s), io=26.0MiB (27.3MB), run=60001-60001msec 00:25:33.255 00:25:33.255 Disk stats (read/write): 00:25:33.255 nvme0n1: ios=6243/6569, merge=0/0, ticks=16548/1851, in_queue=18399, util=99.60% 00:25:33.255 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:33.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:33.255 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:33.255 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:33.256 nvmf hotplug test: fio successful as expected 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - 
SIGINT SIGTERM EXIT 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:33.256 rmmod nvme_tcp 00:25:33.256 rmmod nvme_fabrics 00:25:33.256 rmmod nvme_keyring 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1093045 ']' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1093045 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1093045 ']' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1093045 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1093045 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1093045' 00:25:33.256 killing process with pid 1093045 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1093045 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1093045 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.256 02:24:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.827 00:25:33.827 real 1m8.053s 00:25:33.827 user 4m10.880s 00:25:33.827 sys 0m6.475s 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.827 ************************************ 00:25:33.827 END TEST nvmf_initiator_timeout 00:25:33.827 ************************************ 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.827 02:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.736 02:25:03 
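The END TEST banner above is earned by the delay-bdev toggle in the middle of the run: steps 40 to 43 raised Delay0's average read/write and p99 read latencies to 31,000,000 us (31 s, just past the Linux initiator's default 30 s I/O timeout) and its p99 write latency to 310,000,000 us while fio kept writing, and steps 48 to 51 then dropped everything back to 30 us; fio finishing with err= 0 is the pass signal that the initiator survived the induced stalls. A hedged sketch of that toggle (rpc.py path illustrative, values in microseconds as in the log):

    RPC=./scripts/rpc.py
    # Push latencies past the initiator's 30 s I/O timeout...
    for t in avg_read avg_write p99_read; do
        $RPC bdev_delay_update_latency Delay0 $t 31000000
    done
    $RPC bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3    # give in-flight writes time to pile into the delay
    # ...then restore a fast path and let fio run out its 60 s clock.
    for t in avg_read avg_write p99_read p99_write; do
        $RPC bdev_delay_update_latency Delay0 $t 30
    done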
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:35.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:35.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:35.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:35.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:35.736 ************************************ 00:25:35.736 START TEST nvmf_perf_adq 00:25:35.736 ************************************ 00:25:35.736 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:35.995 * Looking for test storage... 
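gather_supported_nvmf_pci_devs, which has now run once per test section, classifies NICs purely by PCI vendor:device pairs: Intel E810 (0x8086 with 0x1592/0x159b), X722 (0x8086 with 0x37d2), and a list of Mellanox ConnectX parts under 0x15b3, and only afterwards resolves each match to a kernel netdev. A hedged equivalent that scans sysfs directly instead of going through the harness's pci_bus_cache (the real script matches the specific ConnectX device IDs seen above, where this sketch accepts any 0x15b3 function):

    intel=0x8086 mellanox=0x15b3
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        case "$ven:$dev" in
            "$intel:0x1592"|"$intel:0x159b") kind=e810 ;;
            "$intel:0x37d2")                 kind=x722 ;;
            "$mellanox:"*)                   kind=mlx ;;
            *) continue ;;
        esac
        echo "Found ${pci##*/} ($ven - $dev) -> $kind"
    done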
00:25:35.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.995 02:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.995 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.996 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.996 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.996 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:35.996 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:35.996 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:35.996 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:35.996 02:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.898 02:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:37.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:37.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.898 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:37.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
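The pci_net_devs glob just assigned is the whole mechanism behind the 'Found net devices under ...' lines: each PCI function exposes its netdevs under /sys/bus/pci/devices/<bdf>/net/, and the harness keeps only interfaces whose link state reads up (presumably the operstate attribute, which the [[ up == up ]] traces show already expanded). Reduced to a hedged standalone sketch for one of the ports seen above:

    pci=0000:0a:00.0
    for nd in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$nd" ] || continue                      # the glob may match nothing
        if [ "$(cat "$nd/operstate")" = "up" ]; then
            echo "Found net devices under $pci: ${nd##*/}"
        fi
    done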
00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:37.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:37.899 02:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:38.466 02:25:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:40.373 02:25:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
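adq_reload_driver (perf_adq.sh lines 53-55, traced above) runs rmmod ice, modprobe ice, sleep 5 so each pass starts with the driver in a known state; the fixed sleep gives the ice netdevs time to reappear before the rescan that follows. A polling variant of the same wait, purely illustrative since the script itself just sleeps:

  # sketch: reload ice, then poll for bound functions instead of sleeping blindly
  rmmod ice 2>/dev/null || true            # tolerate "module not loaded"
  modprobe ice
  for _ in {1..10}; do
      ls /sys/bus/pci/drivers/ice/0000:* &>/dev/null && break
      sleep 1
  done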
00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.668 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:45.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:45.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:45.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.669 02:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:45.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
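nvmf_tcp_init, traced above, splits the two-port NIC across network namespaces so one host can play both ends of the fabric: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace; the iptables ACCEPT rule and the two pings that follow just verify the path:

  # namespace split, exactly as traced
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                 # start clean
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up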
00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:25:45.669 00:25:45.669 --- 10.0.0.2 ping statistics --- 00:25:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.669 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:25:45.669 00:25:45.669 --- 10.0.0.1 ping statistics --- 00:25:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.669 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1105668 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1105668 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1105668 ']' 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:45.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:45.669 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.669 [2024-07-27 02:25:13.694039] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:25:45.669 [2024-07-27 02:25:13.694128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.669 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.669 [2024-07-27 02:25:13.731429] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:45.669 [2024-07-27 02:25:13.763490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.927 [2024-07-27 02:25:13.854432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.927 [2024-07-27 02:25:13.854490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.927 [2024-07-27 02:25:13.854507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.927 [2024-07-27 02:25:13.854521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.927 [2024-07-27 02:25:13.854533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.927 [2024-07-27 02:25:13.854628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.927 [2024-07-27 02:25:13.854663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.927 [2024-07-27 02:25:13.854782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.927 [2024-07-27 02:25:13.854784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:45.927 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.928 02:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.928 02:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:45.928 [2024-07-27 02:25:14.079732] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.928 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:46.185 Malloc1 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
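adq_configure_nvmf_target 0 drives the target over its RPC socket: read the default sock implementation (posix), set placement-id 0 plus zero-copy send, finish framework init, create the TCP transport with an 8 KiB io-unit and sock priority 0, then export a 64 MiB malloc bdev (512 B blocks) as a namespace of cnode1 listening on 10.0.0.2:4420 (the listen notice follows below). The same sequence as plain RPC calls; rpc.py stands in for the harness's rpc_cmd wrapper:

  rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420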
common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:46.185 [2024-07-27 02:25:14.133136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1105702 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:25:46.185 02:25:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:46.185 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:25:48.085 "tick_rate": 2700000000, 00:25:48.085 "poll_groups": [ 00:25:48.085 { 00:25:48.085 "name": "nvmf_tgt_poll_group_000", 00:25:48.085 "admin_qpairs": 1, 00:25:48.085 "io_qpairs": 1, 00:25:48.085 "current_admin_qpairs": 1, 00:25:48.085 "current_io_qpairs": 1, 00:25:48.085 "pending_bdev_io": 0, 00:25:48.085 "completed_nvme_io": 20488, 00:25:48.085 "transports": [ 00:25:48.085 { 00:25:48.085 "trtype": "TCP" 00:25:48.085 } 00:25:48.085 ] 00:25:48.085 }, 00:25:48.085 { 00:25:48.085 "name": "nvmf_tgt_poll_group_001", 00:25:48.085 "admin_qpairs": 0, 00:25:48.085 "io_qpairs": 1, 00:25:48.085 "current_admin_qpairs": 0, 00:25:48.085 "current_io_qpairs": 1, 00:25:48.085 "pending_bdev_io": 0, 00:25:48.085 "completed_nvme_io": 16499, 00:25:48.085 "transports": [ 00:25:48.085 { 00:25:48.085 "trtype": "TCP" 00:25:48.085 } 00:25:48.085 ] 00:25:48.085 }, 00:25:48.085 { 00:25:48.085 "name": "nvmf_tgt_poll_group_002", 00:25:48.085 "admin_qpairs": 0, 00:25:48.085 "io_qpairs": 1, 00:25:48.085 "current_admin_qpairs": 0, 00:25:48.085 "current_io_qpairs": 1, 00:25:48.085 "pending_bdev_io": 0, 00:25:48.085 "completed_nvme_io": 20354, 00:25:48.085 "transports": [ 00:25:48.085 { 00:25:48.085 "trtype": "TCP" 00:25:48.085 } 00:25:48.085 ] 00:25:48.085 }, 00:25:48.085 { 00:25:48.085 "name": "nvmf_tgt_poll_group_003", 00:25:48.085 "admin_qpairs": 0, 00:25:48.085 "io_qpairs": 1, 00:25:48.085 "current_admin_qpairs": 0, 00:25:48.085 "current_io_qpairs": 1, 00:25:48.085 "pending_bdev_io": 0, 00:25:48.085 "completed_nvme_io": 20519, 00:25:48.085 "transports": [ 00:25:48.085 { 00:25:48.085 "trtype": "TCP" 00:25:48.085 } 00:25:48.085 ] 00:25:48.085 } 00:25:48.085 ] 00:25:48.085 }' 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
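While spdk_nvme_perf runs 4 KiB random reads (-q 64, cores 0xF0) against the listener, the harness samples nvmf_get_stats and counts poll groups serving exactly one I/O qpair; with placement-id 0 the four connections spread one per poll group, so the count below comes out to 4. The check, condensed (the error message is illustrative):

  # baseline pass: every poll group should own exactly one active I/O qpair
  count=$(rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
  [[ $count -ne 4 ]] && echo "expected 4 busy poll groups, got $count" >&2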
target/perf_adq.sh@78 -- # count=4 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:25:48.085 02:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1105702 00:25:56.193 Initializing NVMe Controllers 00:25:56.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:56.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:56.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:56.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:56.194 Initialization complete. Launching workers. 00:25:56.194 ======================================================== 00:25:56.194 Latency(us) 00:25:56.194 Device Information : IOPS MiB/s Average min max 00:25:56.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10759.60 42.03 5949.05 2864.84 9262.18 00:25:56.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8796.50 34.36 7275.88 4030.51 11479.42 00:25:56.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10702.70 41.81 5981.41 2507.19 9843.66 00:25:56.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10751.70 42.00 5953.10 2111.99 8765.93 00:25:56.194 ======================================================== 00:25:56.194 Total : 41010.50 160.20 6243.15 2111.99 11479.42 00:25:56.194 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:56.194 rmmod nvme_tcp 00:25:56.194 rmmod nvme_fabrics 00:25:56.194 rmmod nvme_keyring 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1105668 ']' 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1105668 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1105668 ']' 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1105668 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.194 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1105668 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1105668' 00:25:56.452 killing process with pid 1105668 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1105668 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1105668 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.452 02:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.993 02:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:58.993 02:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:25:58.993 02:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:59.253 02:25:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:01.788 02:25:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
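Between the passes the harness tears everything down: nvmftestfini unloads the initiator-side nvme modules, killprocess stops target pid 1105668, the initiator address is flushed, and adq_reload_driver cycles ice once more; the rescan under way here repeats the device discovery for the second, ADQ-enabled pass. Condensed, with the pid and interface names from this run:

  # teardown between the baseline and ADQ passes
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # drags out nvme_keyring too
  kill 1105668 && wait 1105668                             # stop the first nvmf_tgt
  ip -4 addr flush cvl_0_1                                 # drop the initiator address
  rmmod ice && modprobe ice && sleep 5                     # fresh driver state for pass 2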
nvmf/common.sh@285 -- # xtrace_disable 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:07.059 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:07.059 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:07.059 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.059 
02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:07.059 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:07.059 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.060 02:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:07.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:26:07.060 00:26:07.060 --- 10.0.0.2 ping statistics --- 00:26:07.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.060 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:26:07.060 00:26:07.060 --- 10.0.0.1 ping statistics --- 00:26:07.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.060 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:07.060 net.core.busy_poll = 1 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:07.060 net.core.busy_read = 1 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 
1 mode channel 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1108315 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1108315 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1108315 ']' 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 [2024-07-27 02:25:34.710668] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:07.060 [2024-07-27 02:25:34.710760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.060 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.060 [2024-07-27 02:25:34.749498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:07.060 [2024-07-27 02:25:34.776612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:07.060 [2024-07-27 02:25:34.865193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.060 [2024-07-27 02:25:34.865248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
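adq_configure_driver, traced just above, is where ADQ is actually wired up: hardware TC offload on the target port, busy_poll/busy_read enabled, an mqprio root qdisc carving the queues into two traffic classes of two queues each, and a hardware-only flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1; the set_xps_rxqs helper then aligns transmit steering with those queues. The tc mqprio command was wrapped across lines in the log; the sequence reassembled (all but the sysctls run inside cvl_0_0_ns_spdk):

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1             # root namespace
  sysctl -w net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

Note the doubled 'ip netns exec cvl_0_0_ns_spdk' prefix on the second nvmf_tgt launch below: nvmftestinit prepends NVMF_TARGET_NS_CMD to NVMF_APP on every pass, so the second pass stacks the prefix twice; re-entering the same namespace is harmless.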
00:26:07.060 [2024-07-27 02:25:34.865261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.060 [2024-07-27 02:25:34.865273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.060 [2024-07-27 02:25:34.865283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.060 [2024-07-27 02:25:34.865339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.060 [2024-07-27 02:25:34.865400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.060 [2024-07-27 02:25:34.865465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.060 [2024-07-27 02:25:34.865467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.060 02:25:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:07.060 02:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 [2024-07-27 02:25:35.104497] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.060 Malloc1 00:26:07.060 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:07.061 [2024-07-27 02:25:35.158034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1108460 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:07.061 02:25:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:07.061 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:26:09.592 "tick_rate": 2700000000,
00:26:09.592 "poll_groups": [
00:26:09.592 {
00:26:09.592 "name": "nvmf_tgt_poll_group_000",
00:26:09.592 "admin_qpairs": 1,
00:26:09.592 "io_qpairs": 2,
00:26:09.592 "current_admin_qpairs": 1,
00:26:09.592 "current_io_qpairs": 2,
00:26:09.592 "pending_bdev_io": 0,
00:26:09.592 "completed_nvme_io": 26444,
00:26:09.592 "transports": [
00:26:09.592 {
00:26:09.592 "trtype": "TCP"
00:26:09.592 }
00:26:09.592 ]
00:26:09.592 },
00:26:09.592 {
00:26:09.592 "name": "nvmf_tgt_poll_group_001",
00:26:09.592 "admin_qpairs": 0,
00:26:09.592 "io_qpairs": 2,
00:26:09.592 "current_admin_qpairs": 0,
00:26:09.592 "current_io_qpairs": 2,
00:26:09.592 "pending_bdev_io": 0,
00:26:09.592 "completed_nvme_io": 24492,
00:26:09.592 "transports": [
00:26:09.592 {
00:26:09.592 "trtype": "TCP"
00:26:09.592 }
00:26:09.592 ]
00:26:09.592 },
00:26:09.592 {
00:26:09.592 "name": "nvmf_tgt_poll_group_002",
00:26:09.592 "admin_qpairs": 0,
00:26:09.592 "io_qpairs": 0,
00:26:09.592 "current_admin_qpairs": 0,
00:26:09.592 "current_io_qpairs": 0,
00:26:09.592 "pending_bdev_io": 0,
00:26:09.592 "completed_nvme_io": 0,
00:26:09.592 "transports": [
00:26:09.592 {
00:26:09.592 "trtype": "TCP"
00:26:09.592 }
00:26:09.592 ]
00:26:09.592 },
00:26:09.592 {
00:26:09.592 "name": "nvmf_tgt_poll_group_003",
00:26:09.592 "admin_qpairs": 0,
00:26:09.592 "io_qpairs": 0,
00:26:09.592 "current_admin_qpairs": 0,
00:26:09.592 "current_io_qpairs": 0,
00:26:09.592 "pending_bdev_io": 0,
00:26:09.592 "completed_nvme_io": 0,
00:26:09.592 "transports": [
00:26:09.592 {
00:26:09.592 "trtype": "TCP"
00:26:09.592 }
00:26:09.592 ]
00:26:09.592 }
00:26:09.592 ]
00:26:09.592 }'
00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
00:26:09.592 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2
00:26:09.593 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]]
00:26:09.593 02:25:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1108460
00:26:17.732 Initializing NVMe Controllers
00:26:17.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:17.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:26:17.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:26:17.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:26:17.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:26:17.732 Initialization complete. Launching workers.
00:26:17.732 ======================================================== 00:26:17.732 Latency(us) 00:26:17.732 Device Information : IOPS MiB/s Average min max 00:26:17.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6564.40 25.64 9751.35 1645.22 53975.69 00:26:17.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7931.80 30.98 8068.54 1751.20 52971.35 00:26:17.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6438.40 25.15 9971.93 1798.27 54965.96 00:26:17.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6065.90 23.69 10551.13 1869.68 53330.08 00:26:17.732 ======================================================== 00:26:17.732 Total : 27000.49 105.47 9489.28 1645.22 54965.96 00:26:17.732 00:26:17.732 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:17.733 rmmod nvme_tcp 00:26:17.733 rmmod nvme_fabrics 00:26:17.733 rmmod nvme_keyring 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1108315 ']' 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1108315 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1108315 ']' 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1108315 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1108315 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1108315' 00:26:17.733 killing process with pid 1108315 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1108315 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1108315 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:17.733 
02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.733 02:25:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:21.023 00:26:21.023 real 0m44.828s 00:26:21.023 user 2m36.249s 00:26:21.023 sys 0m10.711s 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:21.023 ************************************ 00:26:21.023 END TEST nvmf_perf_adq 00:26:21.023 ************************************ 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:21.023 ************************************ 00:26:21.023 START TEST nvmf_shutdown 00:26:21.023 ************************************ 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:21.023 * Looking for test storage... 
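A note for readers on the network topology this suite (like the perf_adq run that just finished) builds before starting a target: nvmf_tcp_init in test/nvmf/common.sh splits the two physical ports between a target-side network namespace and the root (initiator) namespace, as the trace further below shows. Reduced to the bare commands, it amounts to roughly the following (a simplified sketch, not the full helper, which also handles re-runs and cleanup):

    # Simplified sketch of the split-namespace topology set up by nvmf_tcp_init.
    ip netns add cvl_0_0_ns_spdk                 # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # first port becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1          # second port stays out as initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in

Every nvmf_tgt instance is then launched through 'ip netns exec cvl_0_0_ns_spdk', so 10.0.0.2:4420 is reachable only across the wire, which the ping checks in the trace below confirm.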
00:26:21.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.023 02:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:21.023 02:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:21.023 ************************************ 00:26:21.023 START TEST nvmf_shutdown_tc1 00:26:21.023 ************************************ 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.023 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.024 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.024 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:21.024 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:21.024 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.024 02:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:22.928 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:22.928 02:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:22.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:22.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:22.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.928 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.929 02:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:22.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:22.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms
00:26:22.929
00:26:22.929 --- 10.0.0.2 ping statistics ---
00:26:22.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:22.929 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:22.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:22.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:26:22.929
00:26:22.929 --- 10.0.0.1 ping statistics ---
00:26:22.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:22.929 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10
-- # set +x 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1111743 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1111743 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1111743 ']' 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.929 02:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 [2024-07-27 02:25:51.036535] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:22.929 [2024-07-27 02:25:51.036612] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.929 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.929 [2024-07-27 02:25:51.075185] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:23.188 [2024-07-27 02:25:51.104462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.188 [2024-07-27 02:25:51.195146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.188 [2024-07-27 02:25:51.195200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.188 [2024-07-27 02:25:51.195228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.188 [2024-07-27 02:25:51.195239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.188 [2024-07-27 02:25:51.195248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
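nvmfappstart passed -m 0x1E to nvmf_tgt above, which is why the reactor notices that follow report cores 1 through 4: 0x1E is binary 11110, one bit per core. A quick way to decode such a mask in the shell (an illustrative one-liner in plain bash, not part of the test scripts):

    # Illustrative SPDK core-mask decoder (assumption: plain bash; not from the suite).
    mask=0x1E
    for i in {0..7}; do
        (( (mask >> i) & 1 )) && echo "reactor expected on core $i"
    done
    # Prints cores 1, 2, 3 and 4. Core 0 is deliberately left free for the
    # initiator-side bdev_svc/bdevperf processes started later with -m 0x1.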
00:26:23.188 [2024-07-27 02:25:51.195343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.188 [2024-07-27 02:25:51.195405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.188 [2024-07-27 02:25:51.195454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:23.188 [2024-07-27 02:25:51.195456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.188 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:23.188 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:23.188 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.188 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.188 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.188 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.448 [2024-07-27 02:25:51.353644] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.448 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.448 Malloc1 00:26:23.448 [2024-07-27 02:25:51.443412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.448 Malloc2 00:26:23.448 Malloc3 00:26:23.448 Malloc4 00:26:23.710 Malloc5 00:26:23.710 Malloc6 00:26:23.710 Malloc7 00:26:23.710 Malloc8 00:26:23.710 Malloc9 00:26:23.969 Malloc10 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1111844 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1111844 /var/tmp/bdevperf.sock 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1111844 ']' 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:23.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.969 { 00:26:23.969 "params": { 00:26:23.969 "name": "Nvme$subsystem", 00:26:23.969 "trtype": "$TEST_TRANSPORT", 00:26:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.969 "adrfam": "ipv4", 00:26:23.969 "trsvcid": "$NVMF_PORT", 00:26:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.969 "hdgst": ${hdgst:-false}, 00:26:23.969 "ddgst": ${ddgst:-false} 00:26:23.969 }, 00:26:23.969 "method": "bdev_nvme_attach_controller" 00:26:23.969 } 00:26:23.969 EOF 00:26:23.969 )") 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.969 { 00:26:23.969 "params": { 00:26:23.969 "name": "Nvme$subsystem", 00:26:23.969 "trtype": "$TEST_TRANSPORT", 00:26:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.969 "adrfam": "ipv4", 00:26:23.969 "trsvcid": "$NVMF_PORT", 00:26:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.969 "hdgst": ${hdgst:-false}, 00:26:23.969 "ddgst": ${ddgst:-false} 00:26:23.969 }, 00:26:23.969 "method": "bdev_nvme_attach_controller" 00:26:23.969 } 00:26:23.969 EOF 00:26:23.969 )") 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.969 { 00:26:23.969 "params": { 00:26:23.969 "name": "Nvme$subsystem", 
00:26:23.969 "trtype": "$TEST_TRANSPORT", 00:26:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.969 "adrfam": "ipv4", 00:26:23.969 "trsvcid": "$NVMF_PORT", 00:26:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.969 "hdgst": ${hdgst:-false}, 00:26:23.969 "ddgst": ${ddgst:-false} 00:26:23.969 }, 00:26:23.969 "method": "bdev_nvme_attach_controller" 00:26:23.969 } 00:26:23.969 EOF 00:26:23.969 )") 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.969 { 00:26:23.969 "params": { 00:26:23.969 "name": "Nvme$subsystem", 00:26:23.969 "trtype": "$TEST_TRANSPORT", 00:26:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.969 "adrfam": "ipv4", 00:26:23.969 "trsvcid": "$NVMF_PORT", 00:26:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.969 "hdgst": ${hdgst:-false}, 00:26:23.969 "ddgst": ${ddgst:-false} 00:26:23.969 }, 00:26:23.969 "method": "bdev_nvme_attach_controller" 00:26:23.969 } 00:26:23.969 EOF 00:26:23.969 )") 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.969 { 00:26:23.969 "params": { 00:26:23.969 "name": "Nvme$subsystem", 00:26:23.969 "trtype": "$TEST_TRANSPORT", 00:26:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.969 "adrfam": "ipv4", 00:26:23.969 "trsvcid": "$NVMF_PORT", 00:26:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.969 "hdgst": ${hdgst:-false}, 00:26:23.969 "ddgst": ${ddgst:-false} 00:26:23.969 }, 00:26:23.969 "method": "bdev_nvme_attach_controller" 00:26:23.969 } 00:26:23.969 EOF 00:26:23.969 )") 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.969 { 00:26:23.969 "params": { 00:26:23.969 "name": "Nvme$subsystem", 00:26:23.969 "trtype": "$TEST_TRANSPORT", 00:26:23.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.969 "adrfam": "ipv4", 00:26:23.969 "trsvcid": "$NVMF_PORT", 00:26:23.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.969 "hdgst": ${hdgst:-false}, 00:26:23.969 "ddgst": ${ddgst:-false} 00:26:23.969 }, 00:26:23.969 "method": "bdev_nvme_attach_controller" 00:26:23.969 } 00:26:23.969 EOF 00:26:23.969 )") 00:26:23.969 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.970 02:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.970 { 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme$subsystem", 00:26:23.970 "trtype": "$TEST_TRANSPORT", 00:26:23.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "$NVMF_PORT", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.970 "hdgst": ${hdgst:-false}, 00:26:23.970 "ddgst": ${ddgst:-false} 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 } 00:26:23.970 EOF 00:26:23.970 )") 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.970 { 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme$subsystem", 00:26:23.970 "trtype": "$TEST_TRANSPORT", 00:26:23.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "$NVMF_PORT", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.970 "hdgst": ${hdgst:-false}, 00:26:23.970 "ddgst": ${ddgst:-false} 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 } 00:26:23.970 EOF 00:26:23.970 )") 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.970 { 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme$subsystem", 00:26:23.970 "trtype": "$TEST_TRANSPORT", 00:26:23.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "$NVMF_PORT", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.970 "hdgst": ${hdgst:-false}, 00:26:23.970 "ddgst": ${ddgst:-false} 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 } 00:26:23.970 EOF 00:26:23.970 )") 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.970 { 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme$subsystem", 00:26:23.970 "trtype": "$TEST_TRANSPORT", 00:26:23.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "$NVMF_PORT", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.970 "hdgst": ${hdgst:-false}, 00:26:23.970 "ddgst": ${ddgst:-false} 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 } 00:26:23.970 EOF 00:26:23.970 )") 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:23.970 02:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme1", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme2", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme3", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme4", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme5", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme6", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme7", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme8", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 
00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme9", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 },{ 00:26:23.970 "params": { 00:26:23.970 "name": "Nvme10", 00:26:23.970 "trtype": "tcp", 00:26:23.970 "traddr": "10.0.0.2", 00:26:23.970 "adrfam": "ipv4", 00:26:23.970 "trsvcid": "4420", 00:26:23.970 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:23.970 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:23.970 "hdgst": false, 00:26:23.970 "ddgst": false 00:26:23.970 }, 00:26:23.970 "method": "bdev_nvme_attach_controller" 00:26:23.970 }' 00:26:23.970 [2024-07-27 02:25:51.964247] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:23.971 [2024-07-27 02:25:51.964327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:23.971 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.971 [2024-07-27 02:25:52.001559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:23.971 [2024-07-27 02:25:52.030709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.971 [2024-07-27 02:25:52.116994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1111844 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:25.875 02:25:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:26.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1111844 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1111743 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.811 { 00:26:26.811 "params": { 00:26:26.811 "name": "Nvme$subsystem", 00:26:26.811 "trtype": "$TEST_TRANSPORT", 00:26:26.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.811 "adrfam": "ipv4", 00:26:26.811 "trsvcid": "$NVMF_PORT", 00:26:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.811 "hdgst": ${hdgst:-false}, 00:26:26.811 "ddgst": ${ddgst:-false} 00:26:26.811 }, 00:26:26.811 "method": "bdev_nvme_attach_controller" 00:26:26.811 } 00:26:26.811 EOF 00:26:26.811 )") 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.811 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.812 { 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme$subsystem", 00:26:26.812 "trtype": "$TEST_TRANSPORT", 00:26:26.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "$NVMF_PORT", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.812 "hdgst": ${hdgst:-false}, 00:26:26.812 "ddgst": ${ddgst:-false} 00:26:26.812 }, 
00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 } 00:26:26.812 EOF 00:26:26.812 )") 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.812 { 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme$subsystem", 00:26:26.812 "trtype": "$TEST_TRANSPORT", 00:26:26.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "$NVMF_PORT", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.812 "hdgst": ${hdgst:-false}, 00:26:26.812 "ddgst": ${ddgst:-false} 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 } 00:26:26.812 EOF 00:26:26.812 )") 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:26.812 02:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme1", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme2", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme3", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme4", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme5", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:26.812 "hdgst": false, 
00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme6", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme7", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme8", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme9", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 },{ 00:26:26.812 "params": { 00:26:26.812 "name": "Nvme10", 00:26:26.812 "trtype": "tcp", 00:26:26.812 "traddr": "10.0.0.2", 00:26:26.812 "adrfam": "ipv4", 00:26:26.812 "trsvcid": "4420", 00:26:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:26.812 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:26.812 "hdgst": false, 00:26:26.812 "ddgst": false 00:26:26.812 }, 00:26:26.812 "method": "bdev_nvme_attach_controller" 00:26:26.812 }' 00:26:27.070 [2024-07-27 02:25:54.971910] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:27.070 [2024-07-27 02:25:54.972003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112232 ] 00:26:27.070 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.070 [2024-07-27 02:25:55.008484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:27.070 [2024-07-27 02:25:55.037475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.070 [2024-07-27 02:25:55.124119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.972 Running I/O for 1 seconds... 
00:26:29.910
00:26:29.910 Latency(us)
00:26:29.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.910 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme1n1 : 1.04 245.17 15.32 0.00 0.00 258183.02 20777.34 251658.24
00:26:29.910 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme2n1 : 1.11 231.34 14.46 0.00 0.00 269362.82 18641.35 264085.81
00:26:29.910 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme3n1 : 1.16 220.03 13.75 0.00 0.00 276446.81 12184.84 267192.70
00:26:29.910 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme4n1 : 1.16 220.83 13.80 0.00 0.00 273392.83 22136.60 265639.25
00:26:29.910 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme5n1 : 1.11 230.06 14.38 0.00 0.00 257101.56 19029.71 264085.81
00:26:29.910 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme6n1 : 1.18 217.79 13.61 0.00 0.00 268322.32 24466.77 259425.47
00:26:29.910 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme7n1 : 1.18 216.78 13.55 0.00 0.00 265247.86 20000.62 302921.96
00:26:29.910 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme8n1 : 1.17 222.40 13.90 0.00 0.00 251856.15 12281.93 287387.50
00:26:29.910 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme9n1 : 1.17 219.58 13.72 0.00 0.00 252310.00 19515.16 259425.47
00:26:29.910 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:29.910 Verification LBA range: start 0x0 length 0x400
00:26:29.910 Nvme10n1 : 1.19 269.66 16.85 0.00 0.00 202662.34 15825.73 274959.93
00:26:29.910 ===================================================================================================================
00:26:29.910 Total : 2293.63 143.35 0.00 0.00 256144.81 12184.84 302921.96
00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:30.171 02:25:58
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.171 rmmod nvme_tcp 00:26:30.171 rmmod nvme_fabrics 00:26:30.171 rmmod nvme_keyring 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1111743 ']' 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1111743 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1111743 ']' 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1111743 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1111743 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1111743' 00:26:30.171 killing process with pid 1111743 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1111743 00:26:30.171 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1111743 00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
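The teardown sequence above removes the nvme-tcp/nvme-fabrics/nvme-keyring modules and then reaps the nvmf target via killprocess. A minimal sketch of the guard-then-kill pattern visible in the trace (simplified; the real autotest_common.sh helper has extra platform branches):

    killprocess() {
        local pid=$1
        # refuse to signal a process whose comm name is "sudo"
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }

In the log the guard resolves the process name to reactor_1, the sudo comparison fails as intended, and pid 1111743 is killed and waited on so its exit status is collected.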
00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.739 02:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.643 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:32.643 00:26:32.643 real 0m11.943s 00:26:32.643 user 0m34.500s 00:26:32.643 sys 0m3.386s 00:26:32.643 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:32.643 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.643 ************************************ 00:26:32.643 END TEST nvmf_shutdown_tc1 00:26:32.643 ************************************ 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:32.902 ************************************ 00:26:32.902 START TEST nvmf_shutdown_tc2 00:26:32.902 ************************************ 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.902 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.903 02:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.903 02:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:32.903 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:32.903 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.903 02:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:32.903 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:32.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.903 02:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:32.903 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:32.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:26:32.904 00:26:32.904 --- 10.0.0.2 ping statistics --- 00:26:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.904 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:26:32.904 00:26:32.904 --- 10.0.0.1 ping statistics --- 00:26:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.904 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:32.904 02:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1113045 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1113045 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1113045 ']' 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
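At this point the tc2 network plumbing is complete: cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), cvl_0_1 remains in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is being launched inside the namespace on cores 1-4 (-m 0x1E). Condensed from the trace, the topology boils down to (the nvmf_tcp_init helper does more bookkeeping than shown):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The two single-packet pings (0.137 ms and 0.124 ms, zero loss) prove the path in both directions before the target starts listening.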
00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.904 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.162 [2024-07-27 02:26:01.067172] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:33.162 [2024-07-27 02:26:01.067254] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.162 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.162 [2024-07-27 02:26:01.106617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:33.162 [2024-07-27 02:26:01.137419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.162 [2024-07-27 02:26:01.228937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.162 [2024-07-27 02:26:01.228996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.162 [2024-07-27 02:26:01.229013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.162 [2024-07-27 02:26:01.229027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.162 [2024-07-27 02:26:01.229038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.162 [2024-07-27 02:26:01.229148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.162 [2024-07-27 02:26:01.229245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.162 [2024-07-27 02:26:01.229292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:33.162 [2024-07-27 02:26:01.229294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.422 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.423 [2024-07-27 02:26:01.370492] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.423 02:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:33.423 
02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.423 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.423 Malloc1 00:26:33.423 [2024-07-27 02:26:01.445489] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.423 Malloc2 00:26:33.423 Malloc3 00:26:33.423 Malloc4 00:26:33.683 Malloc5 00:26:33.683 Malloc6 00:26:33.683 Malloc7 00:26:33.683 Malloc8 00:26:33.683 Malloc9 00:26:33.971 Malloc10 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1113177 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1113177 /var/tmp/bdevperf.sock 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1113177 ']' 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:33.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
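The Malloc1..Malloc10 lines above come from the create_subsystems loop: each iteration cats a batch of RPCs into rpcs.txt, and rpc_cmd then replays the whole file against /var/tmp/spdk.sock in one pass. The per-index payload itself is not echoed by xtrace; a plausible reconstruction, consistent with the Malloc bdevs it creates and the 10.0.0.2:4420 listener registered above (bdev size, block size and serial-number format are assumptions):

    # appended to rpcs.txt once per i in 1..10
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

Batching keeps the setup to a single RPC client session instead of forty separate invocations.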
00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.971 { 00:26:33.971 "params": { 00:26:33.971 "name": "Nvme$subsystem", 00:26:33.971 "trtype": "$TEST_TRANSPORT", 00:26:33.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.971 "adrfam": "ipv4", 00:26:33.971 "trsvcid": "$NVMF_PORT", 00:26:33.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.971 "hdgst": ${hdgst:-false}, 00:26:33.971 "ddgst": ${ddgst:-false} 00:26:33.971 }, 00:26:33.971 "method": "bdev_nvme_attach_controller" 00:26:33.971 } 00:26:33.971 EOF 00:26:33.971 )") 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.971 { 00:26:33.971 "params": { 00:26:33.971 "name": "Nvme$subsystem", 00:26:33.971 "trtype": "$TEST_TRANSPORT", 00:26:33.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.971 "adrfam": "ipv4", 00:26:33.971 "trsvcid": "$NVMF_PORT", 00:26:33.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.971 "hdgst": ${hdgst:-false}, 00:26:33.971 "ddgst": ${ddgst:-false} 00:26:33.971 }, 00:26:33.971 "method": "bdev_nvme_attach_controller" 00:26:33.971 } 00:26:33.971 EOF 00:26:33.971 )") 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.971 { 00:26:33.971 "params": { 00:26:33.971 "name": "Nvme$subsystem", 00:26:33.971 "trtype": "$TEST_TRANSPORT", 00:26:33.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.971 "adrfam": "ipv4", 00:26:33.971 "trsvcid": "$NVMF_PORT", 00:26:33.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.971 "hdgst": ${hdgst:-false}, 00:26:33.971 "ddgst": ${ddgst:-false} 00:26:33.971 }, 00:26:33.971 "method": "bdev_nvme_attach_controller" 00:26:33.971 } 00:26:33.971 EOF 00:26:33.971 )") 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.971 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.972 { 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme$subsystem", 00:26:33.972 
"trtype": "$TEST_TRANSPORT", 00:26:33.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "$NVMF_PORT", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.972 "hdgst": ${hdgst:-false}, 00:26:33.972 "ddgst": ${ddgst:-false} 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 } 00:26:33.972 EOF 00:26:33.972 )") 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.972 { 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme$subsystem", 00:26:33.972 "trtype": "$TEST_TRANSPORT", 00:26:33.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "$NVMF_PORT", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.972 "hdgst": ${hdgst:-false}, 00:26:33.972 "ddgst": ${ddgst:-false} 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 } 00:26:33.972 EOF 00:26:33.972 )") 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.972 { 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme$subsystem", 00:26:33.972 "trtype": "$TEST_TRANSPORT", 00:26:33.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "$NVMF_PORT", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.972 "hdgst": ${hdgst:-false}, 00:26:33.972 "ddgst": ${ddgst:-false} 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 } 00:26:33.972 EOF 00:26:33.972 )") 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.972 { 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme$subsystem", 00:26:33.972 "trtype": "$TEST_TRANSPORT", 00:26:33.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "$NVMF_PORT", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.972 "hdgst": ${hdgst:-false}, 00:26:33.972 "ddgst": ${ddgst:-false} 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 } 00:26:33.972 EOF 00:26:33.972 )") 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.972 02:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.972 { 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme$subsystem", 00:26:33.972 "trtype": "$TEST_TRANSPORT", 00:26:33.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "$NVMF_PORT", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.972 "hdgst": ${hdgst:-false}, 00:26:33.972 "ddgst": ${ddgst:-false} 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 } 00:26:33.972 EOF 00:26:33.972 )") 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.972 { 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme$subsystem", 00:26:33.972 "trtype": "$TEST_TRANSPORT", 00:26:33.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "$NVMF_PORT", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.972 "hdgst": ${hdgst:-false}, 00:26:33.972 "ddgst": ${ddgst:-false} 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 } 00:26:33.972 EOF 00:26:33.972 )") 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.972 { 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme$subsystem", 00:26:33.972 "trtype": "$TEST_TRANSPORT", 00:26:33.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "$NVMF_PORT", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.972 "hdgst": ${hdgst:-false}, 00:26:33.972 "ddgst": ${ddgst:-false} 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 } 00:26:33.972 EOF 00:26:33.972 )") 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
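The gen_nvmf_target_json records above show the pattern: one JSON fragment per subsystem is captured from a heredoc into the config array, the fragments are joined with IFS=,, and the result is pushed through jq. A condensed sketch of that pattern; the surrounding array wrapper here is an assumption to keep the output valid JSON, and the environment variables (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT) come from the test harness:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per subsystem; hdgst/ddgst default off.
        config+=("$(cat <<-EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": ${hdgst:-false},
            "ddgst": ${ddgst:-false}
          },
          "method": "bdev_nvme_attach_controller"
        }
EOF
        )")
    done
    local IFS=,
    # Wrap the comma-joined fragments so jq sees a single valid document.
    printf '[%s]\n' "${config[*]}" | jq .
}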
00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:33.972 02:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme1", 00:26:33.972 "trtype": "tcp", 00:26:33.972 "traddr": "10.0.0.2", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "4420", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:33.972 "hdgst": false, 00:26:33.972 "ddgst": false 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 },{ 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme2", 00:26:33.972 "trtype": "tcp", 00:26:33.972 "traddr": "10.0.0.2", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "4420", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:33.972 "hdgst": false, 00:26:33.972 "ddgst": false 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 },{ 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme3", 00:26:33.972 "trtype": "tcp", 00:26:33.972 "traddr": "10.0.0.2", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "4420", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:33.972 "hdgst": false, 00:26:33.972 "ddgst": false 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 },{ 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme4", 00:26:33.972 "trtype": "tcp", 00:26:33.972 "traddr": "10.0.0.2", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "4420", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:33.972 "hdgst": false, 00:26:33.972 "ddgst": false 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 },{ 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme5", 00:26:33.972 "trtype": "tcp", 00:26:33.972 "traddr": "10.0.0.2", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "4420", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:33.972 "hdgst": false, 00:26:33.972 "ddgst": false 00:26:33.972 }, 00:26:33.972 "method": "bdev_nvme_attach_controller" 00:26:33.972 },{ 00:26:33.972 "params": { 00:26:33.972 "name": "Nvme6", 00:26:33.972 "trtype": "tcp", 00:26:33.973 "traddr": "10.0.0.2", 00:26:33.973 "adrfam": "ipv4", 00:26:33.973 "trsvcid": "4420", 00:26:33.973 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:33.973 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:33.973 "hdgst": false, 00:26:33.973 "ddgst": false 00:26:33.973 }, 00:26:33.973 "method": "bdev_nvme_attach_controller" 00:26:33.973 },{ 00:26:33.973 "params": { 00:26:33.973 "name": "Nvme7", 00:26:33.973 "trtype": "tcp", 00:26:33.973 "traddr": "10.0.0.2", 00:26:33.973 "adrfam": "ipv4", 00:26:33.973 "trsvcid": "4420", 00:26:33.973 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:33.973 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:33.973 "hdgst": false, 00:26:33.973 "ddgst": false 00:26:33.973 }, 00:26:33.973 "method": "bdev_nvme_attach_controller" 00:26:33.973 },{ 00:26:33.973 "params": { 00:26:33.973 "name": "Nvme8", 00:26:33.973 "trtype": "tcp", 00:26:33.973 "traddr": "10.0.0.2", 00:26:33.973 "adrfam": "ipv4", 00:26:33.973 "trsvcid": "4420", 00:26:33.973 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:33.973 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:33.973 "hdgst": false, 00:26:33.973 "ddgst": false 00:26:33.973 }, 00:26:33.973 "method": "bdev_nvme_attach_controller" 00:26:33.973 },{ 00:26:33.973 "params": { 00:26:33.973 "name": "Nvme9", 00:26:33.973 "trtype": "tcp", 00:26:33.973 "traddr": "10.0.0.2", 00:26:33.973 "adrfam": "ipv4", 00:26:33.973 "trsvcid": "4420", 00:26:33.973 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:33.973 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:33.973 "hdgst": false, 00:26:33.973 "ddgst": false 00:26:33.973 }, 00:26:33.973 "method": "bdev_nvme_attach_controller" 00:26:33.973 },{ 00:26:33.973 "params": { 00:26:33.973 "name": "Nvme10", 00:26:33.973 "trtype": "tcp", 00:26:33.973 "traddr": "10.0.0.2", 00:26:33.973 "adrfam": "ipv4", 00:26:33.973 "trsvcid": "4420", 00:26:33.973 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:33.973 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:33.973 "hdgst": false, 00:26:33.973 "ddgst": false 00:26:33.973 }, 00:26:33.973 "method": "bdev_nvme_attach_controller" 00:26:33.973 }' 00:26:33.973 [2024-07-27 02:26:01.965295] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:33.973 [2024-07-27 02:26:01.965395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113177 ] 00:26:33.973 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.973 [2024-07-27 02:26:01.999726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:33.973 [2024-07-27 02:26:02.028745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.230 [2024-07-27 02:26:02.115947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.602 Running I/O for 10 seconds... 
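The records that follow implement waitforio: poll bdevperf's iostat over the RPC socket until Nvme1n1 shows at least 100 completed reads, with up to ten retries 0.25 s apart (the first poll below reads 67 ops, the retry 195, so the loop exits with ret=0). A sketch of that loop, using scripts/rpc.py in place of the test's rpc_cmd wrapper:

waitforio_sketch() {
    local rpc_sock=$1 bdev=$2 i ops
    for ((i = 10; i != 0; i--)); do
        ops=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        # Once enough reads are observed, the verify workload is
        # demonstrably making progress against the target.
        [ "$ops" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}

# e.g. waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1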
00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:35.858 02:26:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:35.858 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:35.858 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:35.858 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:35.858 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.858 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.116 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.116 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:36.116 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:36.116 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.374 02:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1113177 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1113177 ']' 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1113177 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1113177 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1113177' 00:26:36.374 killing process with pid 1113177 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1113177 00:26:36.374 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1113177 00:26:36.374 Received shutdown signal, test time was about 0.935246 seconds 00:26:36.374 00:26:36.374 Latency(us) 00:26:36.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.374 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme1n1 : 0.92 278.48 17.41 0.00 0.00 227022.70 18835.53 253211.69 00:26:36.374 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme2n1 : 0.88 217.52 13.59 0.00 0.00 284611.82 19418.07 256318.58 00:26:36.374 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme3n1 : 0.92 282.47 17.65 0.00 0.00 213327.07 8592.50 256318.58 00:26:36.374 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme4n1 : 0.93 210.85 13.18 0.00 0.00 279830.59 7767.23 268746.15 00:26:36.374 Job: 
Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme5n1 : 0.91 214.09 13.38 0.00 0.00 270406.48 2439.40 257872.02 00:26:36.374 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme6n1 : 0.90 214.40 13.40 0.00 0.00 264518.92 22427.88 233016.89 00:26:36.374 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme7n1 : 0.89 216.14 13.51 0.00 0.00 256180.27 21554.06 254765.13 00:26:36.374 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme8n1 : 0.92 208.12 13.01 0.00 0.00 258833.57 23010.42 288940.94 00:26:36.374 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme9n1 : 0.93 274.48 17.15 0.00 0.00 194150.21 21359.88 254765.13 00:26:36.374 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:36.374 Verification LBA range: start 0x0 length 0x400 00:26:36.374 Nvme10n1 : 0.90 212.30 13.27 0.00 0.00 243550.69 22330.79 236123.78 00:26:36.374 =================================================================================================================== 00:26:36.374 Total : 2328.84 145.55 0.00 0.00 245850.36 2439.40 288940.94 00:26:36.631 02:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1113045 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:37.565 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:37.565 rmmod nvme_tcp 00:26:37.565 rmmod nvme_fabrics 00:26:37.565 rmmod nvme_keyring 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:37.824 
02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1113045 ']' 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1113045 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1113045 ']' 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1113045 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1113045 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1113045' 00:26:37.824 killing process with pid 1113045 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1113045 00:26:37.824 02:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1113045 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.390 02:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.298 00:26:40.298 real 0m7.465s 00:26:40.298 user 0m22.136s 00:26:40.298 sys 0m1.460s 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.298 ************************************ 
00:26:40.298 END TEST nvmf_shutdown_tc2 00:26:40.298 ************************************ 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:40.298 ************************************ 00:26:40.298 START TEST nvmf_shutdown_tc3 00:26:40.298 ************************************ 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.298 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:40.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:40.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:40.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:40.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.299 02:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.299 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:26:40.558 00:26:40.558 --- 10.0.0.2 ping statistics --- 00:26:40.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.558 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:26:40.558 00:26:40.558 --- 10.0.0.1 ping statistics --- 00:26:40.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.558 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1114087 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1114087 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1114087 ']' 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
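The second ping confirms the namespace side (10.0.0.1) is reachable, after which nvmf_tgt is launched through the ip netns exec cvl_0_0_ns_spdk prefix (the prefix appears three times in the command line, apparently from repeated prepending of NVMF_TARGET_NS_CMD onto NVMF_APP; re-entering the same namespace is idempotent, so this is harmless). Its -m 0x1E mask pins the reactors to cores 1-4, matching the four "Reactor started on core N" notices that follow. Decoding such a mask:

mask=0x1E          # 0b11110: bits 1, 2, 3, 4 set
for ((core = 0; core < 8; core++)); do
    # Shift the mask right and test the low bit for each core index.
    (( (mask >> core) & 1 )) && echo "core $core enabled"
done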
00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:40.558 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.558 [2024-07-27 02:26:08.613771] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:40.558 [2024-07-27 02:26:08.613860] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.558 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.558 [2024-07-27 02:26:08.655245] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:40.558 [2024-07-27 02:26:08.681712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.817 [2024-07-27 02:26:08.773156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.817 [2024-07-27 02:26:08.773206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.817 [2024-07-27 02:26:08.773234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.817 [2024-07-27 02:26:08.773246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.817 [2024-07-27 02:26:08.773255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.817 [2024-07-27 02:26:08.773336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.817 [2024-07-27 02:26:08.773411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.817 [2024-07-27 02:26:08.773471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:40.817 [2024-07-27 02:26:08.773473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.817 [2024-07-27 02:26:08.921421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.817 02:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:40.817 
02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.817 02:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.076 Malloc1 00:26:41.076 [2024-07-27 02:26:09.001723] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.076 Malloc2 00:26:41.076 Malloc3 00:26:41.076 Malloc4 00:26:41.076 Malloc5 00:26:41.076 Malloc6 00:26:41.334 Malloc7 00:26:41.334 Malloc8 00:26:41.334 Malloc9 00:26:41.334 Malloc10 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1114265 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1114265 /var/tmp/bdevperf.sock 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1114265 ']' 00:26:41.334 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:41.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
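The shutdown.sh@27-35 records above batch the target-side setup for tc3: each loop iteration cats one group of RPC commands into rpcs.txt, and a single rpc_cmd call then replays the file, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear in one burst. A sketch of that batching pattern; the RPC bodies below are illustrative guesses (the trace shows only the cat calls, not the heredoc contents), though the bdev names and listener address match the output above:

rm -f rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt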
00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:41.335 { 00:26:41.335 "params": { 00:26:41.335 "name": "Nvme$subsystem", 00:26:41.335 "trtype": "$TEST_TRANSPORT", 00:26:41.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.335 "adrfam": "ipv4", 00:26:41.335 "trsvcid": "$NVMF_PORT", 00:26:41.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.335 "hdgst": ${hdgst:-false}, 00:26:41.335 "ddgst": ${ddgst:-false} 00:26:41.335 }, 00:26:41.335 "method": "bdev_nvme_attach_controller" 00:26:41.335 } 00:26:41.335 EOF 00:26:41.335 )") 00:26:41.335 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:26:41.335 [... the "for subsystem" / config+= heredoc / cat sequence above repeats verbatim for the remaining nine subsystems, elided ...]
00:26:41.594 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
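To make the condensed trace above easier to follow: the harness is accumulating one bdev_nvme_attach_controller JSON fragment per subsystem and then joining the fragments for the bdevperf config. A minimal standalone sketch of that pattern follows; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT normally come from the harness (defaults below match this run), and the bare-array wrapper around the joined fragments is an assumption for illustration only, so that jq has a complete document to validate — the real script splices them into its full bdevperf config.

#!/usr/bin/env bash
# Sketch of the config-assembly pattern traced above (nvmf/common.sh@532-558):
# one JSON fragment per subsystem, joined with commas, checked with jq.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in "${@:-1}"; do
    # Unquoted heredoc so $subsystem and the transport variables expand.
    config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join with commas via IFS and "${config[*]}", as the IFS=, / printf steps
# below do; [ ] is only added here so jq can validate the result.
IFS=,
printf '[%s]\n' "${config[*]}" | jq .

Run as "bash sketch.sh 1 2 3", this emits a three-element array matching the first three entries of the joined config printed below.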
00:26:41.594 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:26:41.594 02:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:41.594 "params": { 00:26:41.594 "name": "Nvme1", 00:26:41.594 "trtype": "tcp", 00:26:41.594 "traddr": "10.0.0.2", 00:26:41.594 "adrfam": "ipv4", 00:26:41.594 "trsvcid": "4420", 00:26:41.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:41.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:41.594 "hdgst": false, 00:26:41.594 "ddgst": false 00:26:41.594 }, 00:26:41.594 "method": "bdev_nvme_attach_controller" 00:26:41.594 },{ 00:26:41.594 "params": { 00:26:41.594 "name": "Nvme2", 00:26:41.594 "trtype": "tcp", 00:26:41.594 "traddr": "10.0.0.2", 00:26:41.594 "adrfam": "ipv4", 00:26:41.594 "trsvcid": "4420", 00:26:41.594 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:41.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:41.594 "hdgst": false, 00:26:41.594 "ddgst": false 00:26:41.594 }, 00:26:41.594 "method": "bdev_nvme_attach_controller" 00:26:41.594 },{ 00:26:41.594 "params": { 00:26:41.594 "name": "Nvme3", 00:26:41.594 "trtype": "tcp", 00:26:41.594 "traddr": "10.0.0.2", 00:26:41.594 "adrfam": "ipv4", 00:26:41.594 "trsvcid": "4420", 00:26:41.594 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:41.594 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:41.594 "hdgst": false, 00:26:41.594 "ddgst": false 00:26:41.594 }, 00:26:41.594 "method": "bdev_nvme_attach_controller" 00:26:41.594 },{ 00:26:41.594 "params": { 00:26:41.594 "name": "Nvme4", 00:26:41.594 "trtype": "tcp", 00:26:41.594 "traddr": "10.0.0.2", 00:26:41.594 "adrfam": "ipv4", 00:26:41.594 "trsvcid": "4420", 00:26:41.594 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:41.594 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:41.594 "hdgst": false, 00:26:41.594 "ddgst": false 00:26:41.594 }, 00:26:41.594 "method": "bdev_nvme_attach_controller" 00:26:41.594 },{ 00:26:41.594 "params": { 00:26:41.594 "name": "Nvme5", 00:26:41.594 "trtype": "tcp", 00:26:41.594 "traddr": "10.0.0.2", 00:26:41.594 "adrfam": "ipv4", 00:26:41.594 "trsvcid": "4420", 00:26:41.594 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:41.594 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:41.594 "hdgst": false, 00:26:41.594 "ddgst": false 00:26:41.594 }, 00:26:41.594 "method": "bdev_nvme_attach_controller" 00:26:41.595 },{ 00:26:41.595 "params": { 00:26:41.595 "name": "Nvme6", 00:26:41.595 "trtype": "tcp", 00:26:41.595 "traddr": "10.0.0.2", 00:26:41.595 "adrfam": "ipv4", 00:26:41.595 "trsvcid": "4420", 00:26:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:41.595 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:41.595 "hdgst": false, 00:26:41.595 "ddgst": false 00:26:41.595 }, 00:26:41.595 "method": "bdev_nvme_attach_controller" 00:26:41.595 },{ 00:26:41.595 "params": { 00:26:41.595 "name": "Nvme7", 00:26:41.595 "trtype": "tcp", 00:26:41.595 "traddr": "10.0.0.2", 00:26:41.595 "adrfam": "ipv4", 00:26:41.595 "trsvcid": "4420", 00:26:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:41.595 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:41.595 "hdgst": false, 00:26:41.595 "ddgst": false 00:26:41.595 }, 00:26:41.595 "method": "bdev_nvme_attach_controller" 00:26:41.595 },{ 00:26:41.595 "params": { 00:26:41.595 "name": "Nvme8", 00:26:41.595 "trtype": "tcp", 00:26:41.595 "traddr": "10.0.0.2", 00:26:41.595 "adrfam": "ipv4", 00:26:41.595 "trsvcid": "4420", 00:26:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:41.595 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:41.595 "hdgst": false, 00:26:41.595 "ddgst": false 00:26:41.595 }, 00:26:41.595 "method": "bdev_nvme_attach_controller" 00:26:41.595 },{ 00:26:41.595 "params": { 00:26:41.595 "name": "Nvme9", 00:26:41.595 "trtype": "tcp", 00:26:41.595 "traddr": "10.0.0.2", 00:26:41.595 "adrfam": "ipv4", 00:26:41.595 "trsvcid": "4420", 00:26:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:41.595 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:41.595 "hdgst": false, 00:26:41.595 "ddgst": false 00:26:41.595 }, 00:26:41.595 "method": "bdev_nvme_attach_controller" 00:26:41.595 },{ 00:26:41.595 "params": { 00:26:41.595 "name": "Nvme10", 00:26:41.595 "trtype": "tcp", 00:26:41.595 "traddr": "10.0.0.2", 00:26:41.595 "adrfam": "ipv4", 00:26:41.595 "trsvcid": "4420", 00:26:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:41.595 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:41.595 "hdgst": false, 00:26:41.595 "ddgst": false 00:26:41.595 }, 00:26:41.595 "method": "bdev_nvme_attach_controller" 00:26:41.595 }' 00:26:41.595 [2024-07-27 02:26:09.516018] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:41.595 [2024-07-27 02:26:09.516116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114265 ] 00:26:41.595 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.595 [2024-07-27 02:26:09.550567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:41.595 [2024-07-27 02:26:09.579864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.595 [2024-07-27 02:26:09.666643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.497 Running I/O for 10 seconds... 
00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:43.497 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1114087 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1114087 ']' 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1114087 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.754 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1114087 00:26:44.027 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:44.027 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:44.027 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1114087' 00:26:44.027 killing process with pid 1114087 00:26:44.027 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1114087 00:26:44.027 02:26:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1114087 00:26:44.027 [2024-07-27 02:26:11.934373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b17af0 is same with the state(5) to be set 00:26:44.027 [2024-07-27 02:26:11.934470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b17af0 is same with the state(5) to be set 00:26:44.027 [2024-07-27 02:26:11.934488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b17af0 is same with the state(5) to be set 00:26:44.027 [2024-07-27 02:26:11.934516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b17af0 is same with the state(5) to be set 00:26:44.027 [2024-07-27 02:26:11.934529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b17af0 is same with the state(5) to be set 00:26:44.027 [2024-07-27 02:26:11.934542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1b17af0 is same with the state(5) to be set 00:26:44.027
[... identical tcp.c:1653 "recv state of tqpair=0x1b17af0 is same with the state(5) to be set" lines repeated through 02:26:11.935349, elided ...]
00:26:44.027 [2024-07-27 02:26:11.936738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a610 is same with the state(5) to be set 00:26:44.028
[... same message for tqpair=0x1b1a610 repeated through 02:26:11.937586, elided ...]
00:26:44.028 [2024-07-27 02:26:11.938889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b17fb0 is same with the state(5) to be set 00:26:44.028
[... same message for tqpair=0x1b17fb0 repeated through 02:26:11.939714, elided ...]
00:26:44.029 [2024-07-27 02:26:11.941249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.029
[... same message for tqpair=0x1b18470 repeated through 02:26:11.941968, elided ...]
00:26:44.030 [2024-07-27
02:26:11.941980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.941992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.942006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.942018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.942033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.942146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.942167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.942184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.942196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18470 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.943618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.943664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.943696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.943724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.943752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac830 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.943823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.943844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 
[2024-07-27 02:26:11.943873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.943899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.943926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.943939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937ad0 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.943985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.944005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.944020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.944040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.944055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.944077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.944095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.944108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.944121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176bf10 is same with the state(5) to be set 00:26:44.030 [2024-07-27 02:26:11.944194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.944216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.944231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.030 [2024-07-27 02:26:11.944245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.030 [2024-07-27 02:26:11.944259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.031 [2024-07-27 02:26:11.944272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.031 [2024-07-27 02:26:11.944286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.031 [2024-07-27 02:26:11.944299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.031 [2024-07-27 02:26:11.944312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796480 is same with the state(5) to be set 00:26:44.031 [2024-07-27 02:26:11.946877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18950 is same with the state(5) to be set 00:26:44.031 [2024-07-27 02:26:11.946913 - 02:26:11.949748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: last message repeated 59 times 00:26:44.031 [2024-07-27 02:26:11.949816] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:44.031 [2024-07-27 02:26:11.949891] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:44.031
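The condensed "last message repeated N times" entries above follow the usual syslog convention; the raw capture logged every occurrence as its own record. A minimal sketch of that collapse done offline, assuming one record per line; the script and the file name autorun.log are illustrative, not part of the SPDK test suite:

    import re
    import sys

    # Wall-clock stamp that prefixes each SPDK log record, e.g. [2024-07-27 02:26:11.941437]
    STAMP = re.compile(r"\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\] ")

    def collapse(lines):
        """Yield records, replacing runs of identical messages (stamps ignored)
        with a syslog-style 'last message repeated N times' marker."""
        prev = None
        run = 0
        for line in lines:
            line = line.rstrip("\n")
            msg = STAMP.sub("", line, count=1)  # compare message text without the stamp
            if msg == prev:
                run += 1
                continue
            if run:
                yield "last message repeated %d times" % run
            yield line
            prev = msg
            run = 0
        if run:
            yield "last message repeated %d times" % run

    if __name__ == "__main__":
        for out in collapse(open(sys.argv[1])):
            print(out)

Run as, for example, python3 collapse_log.py autorun.log; the per-record microsecond stamps inside a collapsed run are the only detail dropped.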
[2024-07-27 02:26:11.949960] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:44.031 [2024-07-27 02:26:11.950830] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:44.031 [2024-07-27 02:26:11.953049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.031 [2024-07-27 02:26:11.953086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.031 [2024-07-27 02:26:11.953127 - 02:26:11.955143] nvme_qpair.c: 243/474: the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:63 (lba:8320 through lba:16256, stepping by 128; len:128 throughout) 00:26:44.034 [2024-07-27 02:26:11.953580 - 02:26:11.954502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18e10 is same with the state(5) to be set, logged repeatedly and spliced mid-record into the READ output in the raw capture 00:26:44.034 [2024-07-27 02:26:11.955158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1767e30 is same with the state(5) to be set 00:26:44.034 [2024-07-27 02:26:11.955239] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1767e30 was disconnected and freed. reset controller. 00:26:44.034 [2024-07-27 02:26:11.956206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.034 [2024-07-27 02:26:11.956231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.035 [2024-07-27 02:26:11.956247 - 02:26:11.956314] nvme_qpair.c: the same pair repeats for cid:1, cid:2 and cid:3 00:26:44.035 [2024-07-27 02:26:11.956327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ece0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.956388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ac830 (9): Bad file descriptor 00:26:44.035 [2024-07-27 02:26:11.956425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1937ad0 (9): Bad file descriptor 00:26:44.035 [2024-07-27 02:26:11.956456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176bf10 (9): Bad file descriptor 00:26:44.035
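In the raw capture the target-side tcp.c records and the host-side nvme_qpair.c/nvme_tcp.c records were written to the same output concurrently, so individual records were spliced together mid-word (for example "lba:10752 len:12" interrupted by "the state(5) to be set" before the trailing "8 SGL"). The stream shown above was untangled by separating the two writers. A rough sketch of the mechanical part of that reconstruction, splitting on the wall-clock stamps and grouping records by source file; it recovers record boundaries when writers interleave at record granularity, cannot repair byte-level splices, needs Python 3.7+ for the zero-width split, and autorun.log is again a hypothetical name:

    import re
    import sys
    from collections import defaultdict

    # Split just before each wall-clock stamp to recover individual records.
    REC = re.compile(r"(?=\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\])")
    # First "something.c" after the stamp identifies the logging source file.
    SRC = re.compile(r"\]\s+([a-z_]+\.c)")

    # Join wrapped lines back into one string, then re-split into records.
    text = open(sys.argv[1]).read().replace("\n", " ")
    streams = defaultdict(list)
    for rec in REC.split(text):
        rec = rec.strip()
        if not rec:
            continue
        m = SRC.search(rec)
        streams[m.group(1) if m else "other"].append(rec)

    for src in sorted(streams):
        print("== %s (%d records) ==" % (src, len(streams[src])))
        for rec in streams[src]:
            print(rec)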
(9): Bad file descriptor 00:26:44.035 [2024-07-27 02:26:11.956506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.035 [2024-07-27 02:26:11.956527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.035 [2024-07-27 02:26:11.956542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.035 [2024-07-27 02:26:11.956556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.035 [2024-07-27 02:26:11.956574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.035 [2024-07-27 02:26:11.956588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.035 [2024-07-27 02:26:11.956604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.035 [2024-07-27 02:26:11.956618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.035 [2024-07-27 02:26:11.956631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790380 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.956671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796480 (9): Bad file descriptor 00:26:44.035 [2024-07-27 02:26:11.958648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:44.035 [2024-07-27 02:26:11.958687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178ece0 (9): Bad file descriptor 00:26:44.035 [2024-07-27 02:26:11.959307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 02:26:11.959428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set 00:26:44.035 [2024-07-27 
02:26:11.959440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b192f0 is same with the state(5) to be set
00:26:44.035 [2024-07-27 02:26:11.959984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.035 [2024-07-27 02:26:11.960016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178ece0 with addr=10.0.0.2, port=4420
00:26:44.035 [2024-07-27 02:26:11.960034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ece0 is same with the state(5) to be set
00:26:44.036 [2024-07-27 02:26:11.960136] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:44.036 [2024-07-27 02:26:11.960420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178ece0 (9): Bad file descriptor
00:26:44.036 [2024-07-27 02:26:11.960699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:44.036 [2024-07-27 02:26:11.960722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:44.036 [2024-07-27 02:26:11.960738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:44.036 [2024-07-27 02:26:11.960951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
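errno = 111 in the posix_sock_create failure above is ECONNREFUSED on Linux: nothing was accepting TCP connections at 10.0.0.2:4420 at that instant, which is why the reconnect poll (spdk_nvme_ctrlr_reconnect_poll_async) gives up and nqn.2016-06.io.spdk:cnode4 is left in a failed state. A minimal standalone C sketch (illustrative only, not SPDK or test code) that reproduces the same errno against a port with no listener:

/* probe_connect.c - illustrative sketch, not part of this test run.
 * Attempts a plain TCP connect to the address/port from the log above;
 * against a closed port this prints errno 111 (ECONNREFUSED), the same
 * failure posix_sock_create reports. Build: cc -o probe probe_connect.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}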
00:26:44.036 [2024-07-27 02:26:11.961670] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:44.036 [2024-07-27 02:26:11.963158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.963971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.963988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.964003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.964020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.964034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.964051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.964072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.036 [2024-07-27 02:26:11.964090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.036 [2024-07-27 02:26:11.964105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a150 is same with the state(5) to be set
00:26:44.037 [2024-07-27 02:26:11.964553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.037 [2024-07-27 02:26:11.964784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.037 [2024-07-27 02:26:11.964815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.964831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.964847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.964864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.964882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.964898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.964913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.964928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.964943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.964958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.964974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.964990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965310] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x187db00 was disconnected and freed. reset controller.
00:26:44.038 [2024-07-27 02:26:11.965412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.038 [2024-07-27 02:26:11.965485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.038 [2024-07-27 02:26:11.965499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.965976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.965993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.039 [2024-07-27 02:26:11.966659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.039 [2024-07-27 02:26:11.966674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.966977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.966990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.040 [2024-07-27 02:26:11.967453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.040 [2024-07-27 02:26:11.967530] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x187ed50 was disconnected and freed. reset controller.
00:26:44.040 [2024-07-27 02:26:11.967684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:44.040 [2024-07-27 02:26:11.967707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 further ASYNC EVENT REQUEST command/completion pairs condensed: qid:0 cid:1-3, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:44.040 [2024-07-27 02:26:11.967802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e400 is same with the state(5) to be set
[... 4 ASYNC EVENT REQUEST command/completion pairs condensed: qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:44.040 [2024-07-27 02:26:11.967964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189dbf0 is same with the state(5) to be set
[... 4 ASYNC EVENT REQUEST command/completion pairs condensed: qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:44.041 [2024-07-27 02:26:11.968135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1261610 is same with the state(5) to be set
00:26:44.041 [2024-07-27 02:26:11.968187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790380 (9): Bad file descriptor
[... 4 ASYNC EVENT REQUEST command/completion pairs condensed: qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:44.041 [2024-07-27 02:26:11.968357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928050 is same with the state(5) to be set
00:26:44.041 [2024-07-27 02:26:11.970715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:44.041 [2024-07-27 02:26:11.970752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e400 (9): Bad file descriptor
00:26:44.041 [2024-07-27 02:26:11.970816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.041 [2024-07-27 02:26:11.970838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further READ command/completion pairs condensed: sqid:1 cid:1-63 nsid:1 lba:16512-24448 len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:44.043 [2024-07-27 02:26:11.980971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dac70 is same with the state(5) to be set
00:26:44.043 [2024-07-27 02:26:11.982350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.043 [2024-07-27 02:26:11.982375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further command/completion pairs condensed: WRITE sqid:1 cid:1-4 (lba:24704-25088) interleaved with READ sqid:1 cid:5-63 (lba:17024-24448), len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:44.044 [2024-07-27 02:26:11.984372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dbfe0 is same with the state(5) to be set
00:26:44.044 [2024-07-27 02:26:11.985602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.044 [2024-07-27 02:26:11.985626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 31 further READ command/completion pairs condensed: sqid:1 cid:1-31 nsid:1 lba:8320-12160 len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:44.045 [2024-07-27 02:26:11.986604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.045 [2024-07-27 02:26:11.986618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.045 [2024-07-27 02:26:11.986637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.045 [2024-07-27 02:26:11.986861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.045 [2024-07-27 02:26:11.986877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.986891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.986907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.986921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 
02:26:11.986937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.986951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.986967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.986981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.986997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987245] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.046 [2024-07-27 02:26:11.987533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.046 [2024-07-27 02:26:11.987550] nvme_qpair.c: 
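[Editor's note] The flood of paired "READ ... / ABORTED - SQ DELETION (00/08)" records above is each outstanding READ on the qpair being failed back when its submission queue is deleted during a controller reset: status (00/08) decodes as Status Code Type 0h (Generic) / Status Code 08h (Command Aborted due to SQ Deletion). Every printed completion shows cid:0 and sqhd:0000 because it is a completion synthesized by the host for the aborted request, not one returned by the target. A minimal sketch of how these completions can be recognized through SPDK's public API follows; names are from include/spdk/nvme.h and include/spdk/nvme_spec.h, and this is an illustration, not code from this test:

/* sketch: classify "ABORTED - SQ DELETION" in an I/O completion callback */
#include <stdio.h>
#include "spdk/nvme.h"

static void
read_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
			/* The I/O was aborted because its qpair was torn down
			 * (e.g. mid-reset); it can be retried once the
			 * controller reconnects. */
			fprintf(stderr, "READ aborted by SQ deletion; retry later\n");
			return;
		}
		fprintf(stderr, "READ failed: sct=%d sc=%d\n",
			cpl->status.sct, cpl->status.sc);
		return;
	}
	/* success path */
}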
00:26:44.046 [2024-07-27 02:26:11.987550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.046 [2024-07-27 02:26:11.987563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.046 [2024-07-27 02:26:11.987578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dd470 is same with the state(5) to be set
00:26:44.046 [2024-07-27 02:26:11.989019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.046 [2024-07-27 02:26:11.989044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 58 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:1..58, lba:8320..15616, len:128 ...]
00:26:44.048 [2024-07-27 02:26:11.990916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.048 [2024-07-27 02:26:11.990930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 4 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:60..63, lba:15872..16256, len:128 ...]
00:26:44.048 [2024-07-27 02:26:11.991073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0ac0 is same with the state(5) to be set
00:26:44.048 [2024-07-27 02:26:11.992975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:44.048 [2024-07-27 02:26:11.993025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.048 [2024-07-27 02:26:11.993046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:44.048 [2024-07-27 02:26:11.993109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189dbf0 (9): Bad file descriptor
00:26:44.048 [2024-07-27 02:26:11.993182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1261610 (9): Bad file descriptor
00:26:44.048 [2024-07-27 02:26:11.993218] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:44.048 [2024-07-27 02:26:11.993257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928050 (9): Bad file descriptor
00:26:44.048 [2024-07-27 02:26:11.993289] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:44.048 [2024-07-27 02:26:11.993312] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
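[Editor's note] The "resetting controller" notices mark the bdev_nvme layer disconnecting each controller before reconnecting it; "Failed to flush tqpair=... (9): Bad file descriptor" means the TCP socket backing that qpair was already closed when the flush ran (errno 9, EBADF); and "Unable to perform failover, already in progress." is the expected rejection of a second failover request while one is pending. Roughly the same disconnect/reconnect cycle is available through SPDK's public API as a single call; a hedged sketch, not the test's actual (asynchronous) path:

/* sketch: synchronous controller reset via the public API */
#include <stdio.h>
#include "spdk/nvme.h"

static void
reset_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Disconnects the admin and I/O qpairs (surfacing SQ DELETION aborts
	 * for anything still in flight), then reconnects and re-enables the
	 * controller. */
	int rc = spdk_nvme_ctrlr_reset(ctrlr);

	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
	}
}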
00:26:44.048 [2024-07-27 02:26:11.994010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:44.048 [2024-07-27 02:26:11.994044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:44.048 [2024-07-27 02:26:11.994069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:44.048 [2024-07-27 02:26:11.994348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.048 [2024-07-27 02:26:11.994379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e400 with addr=10.0.0.2, port=4420
00:26:44.048 [2024-07-27 02:26:11.994397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e400 is same with the state(5) to be set
00:26:44.048 [2024-07-27 02:26:11.994577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.048 [2024-07-27 02:26:11.994601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176bf10 with addr=10.0.0.2, port=4420
00:26:44.048 [2024-07-27 02:26:11.994618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176bf10 is same with the state(5) to be set
00:26:44.048 [2024-07-27 02:26:11.994771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.048 [2024-07-27 02:26:11.994794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1937ad0 with addr=10.0.0.2, port=4420
00:26:44.048 [2024-07-27 02:26:11.994810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937ad0 is same with the state(5) to be set
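[Editor's note] errno 111 on Linux is ECONNREFUSED: while the controllers were mid-reset, the target at 10.0.0.2:4420 was not accepting new connections, so the re-established qpair sockets were refused and the reconnect is retried. A one-liner to confirm the mapping (Linux/glibc):

/* prints: errno 111 = Connection refused (ECONNREFUSED = 111) */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("errno 111 = %s (ECONNREFUSED = %d)\n", strerror(111), ECONNREFUSED);
	return 0;
}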
00:26:44.048 [2024-07-27 02:26:11.995646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.048 [2024-07-27 02:26:11.995671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 54 more identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:1..54, lba:16512..23296, len:128 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.050 [2024-07-27 02:26:11.997647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.050 [2024-07-27 02:26:11.997662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18de630 is same with the state(5) to be set 00:26:44.050 [2024-07-27 02:26:11.998989] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:44.050 [2024-07-27 02:26:11.999299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 
00:26:44.050 [2024-07-27 02:26:11.999517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.050 [2024-07-27 02:26:11.999546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189dbf0 with addr=10.0.0.2, port=4420
00:26:44.050 [2024-07-27 02:26:11.999563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189dbf0 is same with the state(5) to be set
00:26:44.050 [2024-07-27 02:26:11.999705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.050 [2024-07-27 02:26:11.999729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796480 with addr=10.0.0.2, port=4420
00:26:44.050 [2024-07-27 02:26:11.999745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796480 is same with the state(5) to be set
00:26:44.050 [2024-07-27 02:26:11.999901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.050 [2024-07-27 02:26:11.999925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ac830 with addr=10.0.0.2, port=4420
00:26:44.050 [2024-07-27 02:26:11.999941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac830 is same with the state(5) to be set
00:26:44.050 [2024-07-27 02:26:12.000083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.050 [2024-07-27 02:26:12.000108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178ece0 with addr=10.0.0.2, port=4420
00:26:44.050 [2024-07-27 02:26:12.000124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ece0 is same with the state(5) to be set
00:26:44.050 [2024-07-27 02:26:12.000147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e400 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.000173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176bf10 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.000193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1937ad0 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.000482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.050 [2024-07-27 02:26:12.000510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1790380 with addr=10.0.0.2, port=4420
00:26:44.050 [2024-07-27 02:26:12.000527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790380 is same with the state(5) to be set
00:26:44.050 [2024-07-27 02:26:12.000546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189dbf0 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.000565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796480 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.000583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ac830 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.000601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178ece0 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.000618] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:26:44.050 [2024-07-27 02:26:12.000631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:26:44.050 [2024-07-27 02:26:12.000647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:26:44.050 [2024-07-27 02:26:12.000669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.050 [2024-07-27 02:26:12.000684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.050 [2024-07-27 02:26:12.000697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.050 [2024-07-27 02:26:12.000715] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:44.050 [2024-07-27 02:26:12.000729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:44.050 [2024-07-27 02:26:12.000742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:44.050 [2024-07-27 02:26:12.001057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.050 [2024-07-27 02:26:12.001087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.050 [2024-07-27 02:26:12.001100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.050 [2024-07-27 02:26:12.001116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790380 (9): Bad file descriptor
00:26:44.050 [2024-07-27 02:26:12.001133] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:26:44.050 [2024-07-27 02:26:12.001147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:26:44.051 [2024-07-27 02:26:12.001161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:26:44.051 [2024-07-27 02:26:12.001179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:44.051 [2024-07-27 02:26:12.001193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:44.051 [2024-07-27 02:26:12.001206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:44.051 [2024-07-27 02:26:12.001223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:44.051 [2024-07-27 02:26:12.001236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:44.051 [2024-07-27 02:26:12.001254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:44.051 [2024-07-27 02:26:12.001271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:44.051 [2024-07-27 02:26:12.001285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:44.051 [2024-07-27 02:26:12.001298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:44.051 [2024-07-27 02:26:12.001343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.051 [2024-07-27 02:26:12.001362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.051 [2024-07-27 02:26:12.001374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.051 [2024-07-27 02:26:12.001387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.051 [2024-07-27 02:26:12.001399] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:44.051 [2024-07-27 02:26:12.001411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:44.051 [2024-07-27 02:26:12.001424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:44.051 [2024-07-27 02:26:12.001469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.051 [2024-07-27 02:26:12.003097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.051 [2024-07-27 02:26:12.003122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION notice pairs continue for cid:1 through cid:63 (lba 16512 through 24448) ...]
00:26:44.053 [2024-07-27 02:26:12.005127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18df7f0 is same with the state(5) to be set
00:26:44.053 [2024-07-27 02:26:12.006407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.053 [2024-07-27 02:26:12.006431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION notice pairs continue for cid:1 through cid:63 (lba 16512 through 24448) ...]
00:26:44.054 [2024-07-27 02:26:12.018938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18800f0 is same with the state(5) to be set
00:26:44.054 [2024-07-27 02:26:12.020612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:44.054 task offset: 8192 on job bdev=Nvme4n1 fails
00:26:44.054
00:26:44.054                                                                                Latency(us)
00:26:44.054 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:26:44.054 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.054 Job: Nvme1n1 ended in about 0.73 seconds with error
00:26:44.054 Verification LBA range: start 0x0 length 0x400
00:26:44.054 Nvme1n1                     :       0.73     174.76      10.92      87.38       0.00  240990.18   21554.06  226803.11
00:26:44.054 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.054 Job: Nvme2n1 ended in about 0.74 seconds with error
00:26:44.054 Verification LBA range: start 0x0 length 0x400
00:26:44.054 Nvme2n1                     :       0.74     180.76      11.30      86.98       0.00  230116.99   20388.98  208161.75
00:26:44.054 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.054 Job: Nvme3n1 ended in about 0.74 seconds with error
00:26:44.054 Verification LBA range: start 0x0 length 0x400
00:26:44.054 Nvme3n1                     :       0.74      86.61       5.41      86.61       0.00  346986.57   25826.04  296708.17
00:26:44.054 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.054 Job: Nvme4n1 ended in about 0.71 seconds with error
00:26:44.055 Verification LBA range: start 0x0 length 0x400
00:26:44.055 Nvme4n1                     :       0.71      90.37       5.65      90.37       0.00  322657.47   12281.93  358846.01
00:26:44.055 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.055 Job: Nvme5n1 ended in about 0.75 seconds with error
00:26:44.055 Verification LBA range: start 0x0 length 0x400
00:26:44.055 Nvme5n1                     :       0.75     170.88      10.68      85.44       0.00  222780.43   32622.36  219035.88
00:26:44.055 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.055 Job: Nvme6n1 ended in about 0.76 seconds with error
00:26:44.055 Verification LBA range: start 0x0 length 0x400
00:26:44.055 Nvme6n1                     :       0.76     169.20      10.58      84.60       0.00  219319.56   21942.42  231463.44
00:26:44.055 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.055 Job: Nvme7n1 ended in about 0.72 seconds with error
00:26:44.055 Verification LBA range: start 0x0 length 0x400
00:26:44.055 Nvme7n1                     :       0.72     177.82      11.11      88.91       0.00  201236.29    6019.60  287387.50
00:26:44.055 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.055 Job: Nvme8n1 ended in about 0.72 seconds with error
00:26:44.055 Verification LBA range: start 0x0 length 0x400
00:26:44.055 Nvme8n1                     :       0.72      88.77       5.55      88.77       0.00  293741.42    7039.05  407002.83
00:26:44.055 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.055 Job: Nvme9n1 ended in about 0.77 seconds with error
00:26:44.055 Verification LBA range: start 0x0 length 0x400
00:26:44.055 Nvme9n1                     :       0.77     166.17      10.39      83.08       0.00  206402.75   22622.06  228356.55
00:26:44.055 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.055 Job:
Nvme10n1 ended in about 0.74 seconds with error 00:26:44.055 Verification LBA range: start 0x0 length 0x400 00:26:44.055 Nvme10n1 : 0.74 86.20 5.39 86.20 0.00 287189.52 21651.15 268746.15 00:26:44.055 =================================================================================================================== 00:26:44.055 Total : 1391.55 86.97 868.35 0.00 248548.03 6019.60 407002.83 00:26:44.055 [2024-07-27 02:26:12.048966] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:44.055 [2024-07-27 02:26:12.049042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:26:44.055 [2024-07-27 02:26:12.049593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.055 [2024-07-27 02:26:12.049636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1261610 with addr=10.0.0.2, port=4420 00:26:44.055 [2024-07-27 02:26:12.049667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1261610 is same with the state(5) to be set 00:26:44.055 [2024-07-27 02:26:12.049826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.055 [2024-07-27 02:26:12.049852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928050 with addr=10.0.0.2, port=4420 00:26:44.055 [2024-07-27 02:26:12.049868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928050 is same with the state(5) to be set 00:26:44.055 [2024-07-27 02:26:12.049901] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.055 [2024-07-27 02:26:12.049926] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.055 [2024-07-27 02:26:12.049945] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.055 [2024-07-27 02:26:12.049965] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.055 [2024-07-27 02:26:12.049984] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.055 [2024-07-27 02:26:12.050003] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.055 [2024-07-27 02:26:12.050033] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.055 [2024-07-27 02:26:12.050053] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
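Sanity check on the table above (derived arithmetic, not log output): each job issues 64 KiB verify I/Os, so the throughput column should follow

\[ \text{MiB/s} \;=\; \frac{\text{IOPS} \times 65536}{2^{20}} \;=\; \frac{\text{IOPS}}{16}, \qquad \text{e.g. Nvme1n1: } \frac{174.76}{16} \approx 10.92 . \]

This holds for every row, and the per-job IOPS and MiB/s columns sum to the Total line (1391.55 and 86.97, within rounding).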
00:26:44.055 [2024-07-27 02:26:12.050630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:44.055 [2024-07-27 02:26:12.050660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.055 [2024-07-27 02:26:12.050679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:44.055 [2024-07-27 02:26:12.050695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:44.055 [2024-07-27 02:26:12.050711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:44.055 [2024-07-27 02:26:12.050727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:44.055 [2024-07-27 02:26:12.050744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:44.055 [2024-07-27 02:26:12.050815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1261610 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.050843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928050 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.050909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:44.055 [2024-07-27 02:26:12.051103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.051135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1937ad0 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.051152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937ad0 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.051328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.051354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176bf10 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.051370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176bf10 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.051506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.051530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e400 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.051546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e400 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.051688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.051718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178ece0 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.051734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ece0 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.051878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.051902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ac830 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.051918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ac830 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.052072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.052097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796480 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.052112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796480 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.052272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.052297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189dbf0 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.052313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189dbf0 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.052328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:44.055 [2024-07-27 02:26:12.052341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:44.055 [2024-07-27 02:26:12.052357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:44.055 [2024-07-27 02:26:12.052376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:26:44.055 [2024-07-27 02:26:12.052390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:26:44.055 [2024-07-27 02:26:12.052403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:26:44.055 [2024-07-27 02:26:12.052470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.055 [2024-07-27 02:26:12.052492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.055 [2024-07-27 02:26:12.052634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.055 [2024-07-27 02:26:12.052660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1790380 with addr=10.0.0.2, port=4420
00:26:44.055 [2024-07-27 02:26:12.052676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790380 is same with the state(5) to be set
00:26:44.055 [2024-07-27 02:26:12.052695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1937ad0 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.052714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176bf10 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.052733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e400 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.052750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178ece0 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.052767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ac830 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.052785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1796480 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.052802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189dbf0 (9): Bad file descriptor
00:26:44.055 [2024-07-27 02:26:12.052843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790380 (9): Bad file descriptor
00:26:44.056 [2024-07-27 02:26:12.052865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.052879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.052892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:44.056 [2024-07-27 02:26:12.052909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.052924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.052937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.056 [2024-07-27 02:26:12.052952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.052966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.052984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:26:44.056 [2024-07-27 02:26:12.053000] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.053014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.053027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:44.056 [2024-07-27 02:26:12.053043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.053057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.053080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:44.056 [2024-07-27 02:26:12.053096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.053111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.053124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:44.056 [2024-07-27 02:26:12.053139] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.053153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.053166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:26:44.056 [2024-07-27 02:26:12.053207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.056 [2024-07-27 02:26:12.053226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.056 [2024-07-27 02:26:12.053238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.056 [2024-07-27 02:26:12.053249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.056 [2024-07-27 02:26:12.053261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.056 [2024-07-27 02:26:12.053273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.056 [2024-07-27 02:26:12.053284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.056 [2024-07-27 02:26:12.053296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:44.056 [2024-07-27 02:26:12.053308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:44.056 [2024-07-27 02:26:12.053321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:44.056 [2024-07-27 02:26:12.053359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
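Context for the repeated connect() failures above (added note, not log output): errno 111 on Linux is ECONNREFUSED, i.e. every reconnect attempt reached 10.0.0.2:4420 after the target's listener was already gone, which is exactly what shutdown_tc3 provokes. A minimal bash probe, assuming the same address and port, shows the same failure mode:

# Hypothetical probe, not part of the test scripts: bash's /dev/tcp
# pseudo-device performs a plain TCP connect(); with no listener on the
# port it fails just as posix_sock_create does (ECONNREFUSED, errno 111).
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect to 10.0.0.2:4420 refused or timed out"
fi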
00:26:44.314 02:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:26:44.314 02:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1114265 00:26:45.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1114265) - No such process 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.690 rmmod nvme_tcp 00:26:45.690 rmmod nvme_fabrics 00:26:45.690 rmmod nvme_keyring 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.690 02:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.690 02:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.596 00:26:47.596 real 0m7.229s 00:26:47.596 user 0m16.676s 00:26:47.596 sys 0m1.452s 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:47.596 ************************************ 00:26:47.596 END TEST nvmf_shutdown_tc3 00:26:47.596 ************************************ 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:26:47.596 00:26:47.596 real 0m26.863s 00:26:47.596 user 1m13.400s 00:26:47.596 sys 0m6.450s 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:47.596 ************************************ 00:26:47.596 END TEST nvmf_shutdown 00:26:47.596 ************************************ 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:26:47.596 00:26:47.596 real 16m48.594s 00:26:47.596 user 47m17.049s 00:26:47.596 sys 3m52.151s 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.596 02:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:47.596 ************************************ 00:26:47.596 END TEST nvmf_target_extra 00:26:47.596 ************************************ 00:26:47.596 02:26:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:47.596 02:26:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:47.596 02:26:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.596 02:26:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.596 ************************************ 00:26:47.596 START TEST nvmf_host 00:26:47.596 ************************************ 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:47.596 * Looking for test storage... 
00:26:47.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.596 02:26:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.854 ************************************ 00:26:47.854 START TEST nvmf_multicontroller 00:26:47.854 ************************************ 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:47.854 * Looking for test storage... 
00:26:47.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.854 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.855 02:26:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.855 02:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.792 02:26:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:49.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.792 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:49.793 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:49.793 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:49.793 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:26:49.793 00:26:49.793 --- 10.0.0.2 ping statistics --- 00:26:49.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.793 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:26:49.793 00:26:49.793 --- 10.0.0.1 ping statistics --- 00:26:49.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.793 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1116694 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1116694 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1116694 ']' 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.793 02:26:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.052 [2024-07-27 02:26:17.972831] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
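The two pings above complete nvmf_tcp_init's connectivity check. Condensed from the nvmf/common.sh trace (same device names and addresses as the log), the fixture moves one e810 port into a private network namespace to host the target at 10.0.0.2 while its sibling stays in the root namespace as the initiator at 10.0.0.1; nvmf_tgt is then launched inside that namespace with ip netns exec, as seen just below:

# Recap of the nvmf_tcp_init commands traced above, in order:
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator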
00:26:50.052 [2024-07-27 02:26:17.972906] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.052 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.052 [2024-07-27 02:26:18.014862] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:50.052 [2024-07-27 02:26:18.041355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:50.052 [2024-07-27 02:26:18.127077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.052 [2024-07-27 02:26:18.127134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.052 [2024-07-27 02:26:18.127148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.052 [2024-07-27 02:26:18.127167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.052 [2024-07-27 02:26:18.127177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.052 [2024-07-27 02:26:18.127231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.052 [2024-07-27 02:26:18.127294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.052 [2024-07-27 02:26:18.127297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.310 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.310 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:50.310 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.310 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 [2024-07-27 02:26:18.253288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 Malloc0 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 [2024-07-27 02:26:18.309493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 [2024-07-27 02:26:18.317375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 Malloc1 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1116832 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1116832 /var/tmp/bdevperf.sock 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1116832 ']' 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:50.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
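
Note: the target-side plumbing that just completed is equivalent to the following rpc.py sequence against the target's /var/tmp/spdk.sock. This is a condensed sketch of the rpc_cmd calls captured above (rpc_cmd in the harness drives the same scripts/rpc.py commands); cnode2/Malloc1 repeat the cnode1/Malloc0 steps with serial SPDK00000000000002:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on the same address give the host two network paths to one subsystem.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started with -z (start suspended, wait for RPC) and -r /var/tmp/bdevperf.sock, so the waitforlisten that follows simply polls that socket before the multipath checks begin.
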
00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.311 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.569 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.570 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:50.570 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:50.570 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.570 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.830 NVMe0n1 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.830 1 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.830 request: 00:26:50.830 { 00:26:50.830 "name": "NVMe0", 00:26:50.830 "trtype": "tcp", 00:26:50.830 "traddr": "10.0.0.2", 00:26:50.830 "adrfam": "ipv4", 00:26:50.830 
"trsvcid": "4420", 00:26:50.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.830 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:50.830 "hostaddr": "10.0.0.2", 00:26:50.830 "hostsvcid": "60000", 00:26:50.830 "prchk_reftag": false, 00:26:50.830 "prchk_guard": false, 00:26:50.830 "hdgst": false, 00:26:50.830 "ddgst": false, 00:26:50.830 "method": "bdev_nvme_attach_controller", 00:26:50.830 "req_id": 1 00:26:50.830 } 00:26:50.830 Got JSON-RPC error response 00:26:50.830 response: 00:26:50.830 { 00:26:50.830 "code": -114, 00:26:50.830 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:50.830 } 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.830 request: 00:26:50.830 { 00:26:50.830 "name": "NVMe0", 00:26:50.830 "trtype": "tcp", 00:26:50.830 "traddr": "10.0.0.2", 00:26:50.830 "adrfam": "ipv4", 00:26:50.830 "trsvcid": "4420", 00:26:50.830 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:50.830 "hostaddr": "10.0.0.2", 00:26:50.830 "hostsvcid": "60000", 00:26:50.830 "prchk_reftag": false, 00:26:50.830 "prchk_guard": false, 00:26:50.830 "hdgst": false, 00:26:50.830 "ddgst": false, 00:26:50.830 "method": "bdev_nvme_attach_controller", 00:26:50.830 "req_id": 1 00:26:50.830 } 00:26:50.830 Got JSON-RPC error response 00:26:50.830 response: 00:26:50.830 { 00:26:50.830 "code": -114, 00:26:50.830 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:26:50.830 } 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.830 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.830 request: 00:26:50.830 { 00:26:50.830 "name": "NVMe0", 00:26:50.830 "trtype": "tcp", 00:26:50.830 "traddr": "10.0.0.2", 00:26:50.830 "adrfam": "ipv4", 00:26:50.830 "trsvcid": "4420", 00:26:50.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.830 "hostaddr": "10.0.0.2", 00:26:50.830 "hostsvcid": "60000", 00:26:50.830 "prchk_reftag": false, 00:26:50.831 "prchk_guard": false, 00:26:50.831 "hdgst": false, 00:26:50.831 "ddgst": false, 00:26:50.831 "multipath": "disable", 00:26:50.831 "method": "bdev_nvme_attach_controller", 00:26:50.831 "req_id": 1 00:26:50.831 } 00:26:50.831 Got JSON-RPC error response 00:26:50.831 response: 00:26:50.831 { 00:26:50.831 "code": -114, 00:26:50.831 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:50.831 } 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.831 request: 00:26:50.831 { 00:26:50.831 "name": "NVMe0", 00:26:50.831 "trtype": "tcp", 00:26:50.831 "traddr": "10.0.0.2", 00:26:50.831 "adrfam": "ipv4", 00:26:50.831 "trsvcid": "4420", 00:26:50.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.831 "hostaddr": "10.0.0.2", 00:26:50.831 "hostsvcid": "60000", 00:26:50.831 "prchk_reftag": false, 00:26:50.831 "prchk_guard": false, 00:26:50.831 "hdgst": false, 00:26:50.831 "ddgst": false, 00:26:50.831 "multipath": "failover", 00:26:50.831 "method": "bdev_nvme_attach_controller", 00:26:50.831 "req_id": 1 00:26:50.831 } 00:26:50.831 Got JSON-RPC error response 00:26:50.831 response: 00:26:50.831 { 00:26:50.831 "code": -114, 00:26:50.831 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:50.831 } 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.831 02:26:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:51.090 00:26:51.090 02:26:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:51.090 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:51.090 02:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:52.469 0 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1116832 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1116832 ']' 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1116832 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1116832 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
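
The three rejected attaches above are the point of the test: bdevperf already owns a controller named NVMe0, so re-attaching that name with a different hostnqn, a different subsystem NQN, or with multipath disabled all return -114, and "-x failover" to the identical 4420 path is refused as well because the path is not new ("already exists with the specified network path"). The only variant accepted is the same subsystem and host identity over a genuinely new path, which is what succeeded above; sketched with scripts/rpc.py from the same checkout:

    # Second path to cnode1 on port 4421 joins the existing NVMe0 controller.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
           -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # After NVMe1 is also attached on 4421, the @90 check expects exactly two:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe
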
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1116832' 00:26:52.469 killing process with pid 1116832 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1116832 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1116832 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:26:52.469 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:52.469 [2024-07-27 02:26:18.415377] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:26:52.469 [2024-07-27 02:26:18.415476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116832 ] 00:26:52.469 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.469 [2024-07-27 02:26:18.448005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:52.469 [2024-07-27 02:26:18.476772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:52.469 [2024-07-27 02:26:18.563941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:52.469 [2024-07-27 02:26:19.156855] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 3ad1918c-5dce-4367-9f7b-8ddc9640ff81 already exists
00:26:52.469 [2024-07-27 02:26:19.156896] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:3ad1918c-5dce-4367-9f7b-8ddc9640ff81 alias for bdev NVMe1n1
00:26:52.469 [2024-07-27 02:26:19.156911] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:26:52.469 Running I/O for 1 seconds...
00:26:52.469
00:26:52.469 Latency(us)
00:26:52.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:52.469 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:26:52.469 NVMe0n1 : 1.01 18875.64 73.73 0.00 0.00 6762.89 4077.80 17573.36
00:26:52.469 ===================================================================================================================
00:26:52.469 Total : 18875.64 73.73 0.00 0.00 6762.89 4077.80 17573.36
00:26:52.469 Received shutdown signal, test time was about 1.000000 seconds
00:26:52.469
00:26:52.469 Latency(us)
00:26:52.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:52.469 ===================================================================================================================
00:26:52.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:52.469 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:52.469 rmmod nvme_tcp
00:26:52.469 rmmod nvme_fabrics
00:26:52.469 rmmod nvme_keyring
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1116694 ']'
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1116694
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1116694 ']'
00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1116694
00:26:52.469 02:26:20
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:52.469 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1116694 00:26:52.727 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:52.727 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:52.727 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1116694' 00:26:52.727 killing process with pid 1116694 00:26:52.727 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1116694 00:26:52.727 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1116694 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.987 02:26:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.893 02:26:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.893 00:26:54.893 real 0m7.185s 00:26:54.893 user 0m11.214s 00:26:54.893 sys 0m2.159s 00:26:54.893 02:26:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:54.893 02:26:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:54.893 ************************************ 00:26:54.893 END TEST nvmf_multicontroller 00:26:54.893 ************************************ 00:26:54.893 02:26:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:54.893 02:26:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:54.893 02:26:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:54.893 02:26:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.893 ************************************ 00:26:54.893 START TEST nvmf_aer 00:26:54.893 ************************************ 00:26:54.893 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:54.893 * Looking for test storage... 
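
The aer.sh run that starts here follows the same shape as the other host tests: bring up a target, publish a subsystem created with -m 2 (room for a second namespace), point SPDK's AER test tool at it, then add a namespace so the tool can observe the resulting async-event notification. For reference, a sketch of the tool invocation and the readiness handshake captured further down (touch-file path as in the log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/test/nvme/aer/aer" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # waitforfile equivalent: the harness polls until the tool creates the touch file.
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
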
00:26:55.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.153 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.154 02:26:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.057 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:57.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:57.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:57.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.058 02:26:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:57.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.058 02:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:26:57.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:26:57.058 00:26:57.058 --- 10.0.0.2 ping statistics --- 00:26:57.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.058 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:57.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:26:57.058 00:26:57.058 --- 10.0.0.1 ping statistics --- 00:26:57.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.058 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1119039 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1119039 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1119039 ']' 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:57.058 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.058 [2024-07-27 02:26:25.103193] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
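
The two one-packet pings close out nvmf_tcp_init: one port of the E810 pair discovered earlier stays in the root namespace as the initiator side (cvl_0_1, 10.0.0.1) and the other is moved into a private namespace for the target (cvl_0_0, 10.0.0.2). Condensed from the ip/iptables calls in the xtrace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the target side
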
00:26:57.058 [2024-07-27 02:26:25.103278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.058 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.058 [2024-07-27 02:26:25.150071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:57.058 [2024-07-27 02:26:25.180680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:57.317 [2024-07-27 02:26:25.278771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.317 [2024-07-27 02:26:25.278836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.317 [2024-07-27 02:26:25.278854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.317 [2024-07-27 02:26:25.278868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.317 [2024-07-27 02:26:25.278880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.317 [2024-07-27 02:26:25.278941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.317 [2024-07-27 02:26:25.278997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.317 [2024-07-27 02:26:25.279114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.317 [2024-07-27 02:26:25.279118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.317 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:57.317 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:26:57.317 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.317 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:57.317 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.318 [2024-07-27 02:26:25.433231] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.318 Malloc0 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:57.318 02:26:25 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.318 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.578 [2024-07-27 02:26:25.484590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.578 [ 00:26:57.578 { 00:26:57.578 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:57.578 "subtype": "Discovery", 00:26:57.578 "listen_addresses": [], 00:26:57.578 "allow_any_host": true, 00:26:57.578 "hosts": [] 00:26:57.578 }, 00:26:57.578 { 00:26:57.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.578 "subtype": "NVMe", 00:26:57.578 "listen_addresses": [ 00:26:57.578 { 00:26:57.578 "trtype": "TCP", 00:26:57.578 "adrfam": "IPv4", 00:26:57.578 "traddr": "10.0.0.2", 00:26:57.578 "trsvcid": "4420" 00:26:57.578 } 00:26:57.578 ], 00:26:57.578 "allow_any_host": true, 00:26:57.578 "hosts": [], 00:26:57.578 "serial_number": "SPDK00000000000001", 00:26:57.578 "model_number": "SPDK bdev Controller", 00:26:57.578 "max_namespaces": 2, 00:26:57.578 "min_cntlid": 1, 00:26:57.578 "max_cntlid": 65519, 00:26:57.578 "namespaces": [ 00:26:57.578 { 00:26:57.578 "nsid": 1, 00:26:57.578 "bdev_name": "Malloc0", 00:26:57.578 "name": "Malloc0", 00:26:57.578 "nguid": "9054E8291A8C4759BB517FA6697B1148", 00:26:57.578 "uuid": "9054e829-1a8c-4759-bb51-7fa6697b1148" 00:26:57.578 } 00:26:57.578 ] 00:26:57.578 } 00:26:57.578 ] 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1119066 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@1265 -- # local i=0 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:57.578 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.578 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 Malloc1 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 [ 00:26:57.837 { 00:26:57.837 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:57.837 "subtype": "Discovery", 00:26:57.837 "listen_addresses": [], 00:26:57.837 "allow_any_host": true, 00:26:57.837 "hosts": [] 00:26:57.837 }, 00:26:57.837 { 00:26:57.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.837 "subtype": "NVMe", 00:26:57.837 "listen_addresses": [ 00:26:57.837 { 00:26:57.837 "trtype": "TCP", 00:26:57.837 "adrfam": "IPv4", 00:26:57.837 "traddr": "10.0.0.2", 00:26:57.837 "trsvcid": "4420" 00:26:57.837 } 00:26:57.837 ], 00:26:57.837 "allow_any_host": true, 00:26:57.837 "hosts": [], 00:26:57.837 "serial_number": "SPDK00000000000001", 00:26:57.837 "model_number": "SPDK bdev Controller", 00:26:57.837 "max_namespaces": 2, 00:26:57.837 "min_cntlid": 1, 00:26:57.837 "max_cntlid": 65519, 00:26:57.837 "namespaces": [ 00:26:57.837 { 00:26:57.837 "nsid": 1, 00:26:57.837 "bdev_name": "Malloc0", 00:26:57.837 "name": "Malloc0", 00:26:57.837 "nguid": "9054E8291A8C4759BB517FA6697B1148", 00:26:57.837 "uuid": "9054e829-1a8c-4759-bb51-7fa6697b1148" 
00:26:57.837 }, 00:26:57.837 { 00:26:57.837 "nsid": 2, 00:26:57.837 "bdev_name": "Malloc1", 00:26:57.837 "name": "Malloc1", 00:26:57.837 "nguid": "83F9ADFC672C4CE39E5B24EE1681B8D6", 00:26:57.837 "uuid": "83f9adfc-672c-4ce3-9e5b-24ee1681b8d6" 00:26:57.837 } 00:26:57.837 ] 00:26:57.837 } 00:26:57.837 ] 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1119066 00:26:57.837 Asynchronous Event Request test 00:26:57.837 Attaching to 10.0.0.2 00:26:57.837 Attached to 10.0.0.2 00:26:57.837 Registering asynchronous event callbacks... 00:26:57.837 Starting namespace attribute notice tests for all controllers... 00:26:57.837 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:57.837 aer_cb - Changed Namespace 00:26:57.837 Cleaning up... 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:57.837 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:57.837 rmmod nvme_tcp 00:26:57.838 rmmod nvme_fabrics 00:26:57.838 rmmod nvme_keyring 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1119039 ']' 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1119039 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1119039 ']' 00:26:57.838 
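The "aer_cb - Changed Namespace" lines above are the point of this test: the aer tool connects to cnode1 while the subsystem holds a single namespace, and the harness then hot-adds Malloc1 as nsid 2, raising a Namespace Attribute Changed AEN (log page 0x04, event type 0x02) on the live controller; nvmf_get_subsystems confirms both namespaces afterwards. A sketch of the provisioning sequence with rpc.py invoked directly (rpc_cmd in the trace is a thin wrapper around it; default socket path assumed):

    # Subsystem capped at two namespaces (-m 2); nsid 1 exists before the host connects
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # With the aer tool connected and waiting, hot-adding a second namespace fires the AEN
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2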
02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1119039 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1119039 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1119039' 00:26:57.838 killing process with pid 1119039 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1119039 00:26:57.838 02:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1119039 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.096 02:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:00.632 00:27:00.632 real 0m5.194s 00:27:00.632 user 0m4.110s 00:27:00.632 sys 0m1.821s 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:00.632 ************************************ 00:27:00.632 END TEST nvmf_aer 00:27:00.632 ************************************ 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.632 ************************************ 00:27:00.632 START TEST nvmf_async_init 00:27:00.632 ************************************ 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:00.632 * Looking for test storage... 
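The nvmf_aer run closes out in about 5.2 s of wall time; before run_test launches nvmf_async_init (which is now probing for its test storage), the trap handler killed the target by pid, unloaded the nvme-tcp/nvme-fabrics modules, and flushed the initiator-side address. A rough sketch of the equivalent manual cleanup, assuming the pid captured at startup and that _remove_spdk_ns amounts to deleting the test namespace:

    # Tear down after a test: stop the target, then unwind the namespace plumbing
    sudo kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null
    sudo ip -4 addr flush cvl_0_1
    sudo ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns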
00:27:00.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.632 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:00.633 02:26:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1cabcf8bd661432a9c6e096595a714dc 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:00.633 02:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
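The scan above classifies candidate NICs purely by PCI vendor/device ID: 0x8086:0x159b matches the e810 table and the ice driver, so both functions of the adapter are kept. As the next lines show, each matched PCI function is then resolved to its kernel net device through a sysfs glob; a sketch of that lookup, assuming the PCI addresses from this run:

    # Map each matched PCI function to its net device name via sysfs
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net device under $pci: ${dev##*/}"
        done
    done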
00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:02.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.536 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:02.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:27:02.537 00:27:02.537 --- 10.0.0.2 ping statistics --- 00:27:02.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.537 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:02.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:27:02.537 00:27:02.537 --- 10.0.0.1 ping statistics --- 00:27:02.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.537 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1121002 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1121002 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1121002 ']' 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.537 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.537 [2024-07-27 02:26:30.470511] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:27:02.537 [2024-07-27 02:26:30.470581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.537 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.537 [2024-07-27 02:26:30.506880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:02.537 [2024-07-27 02:26:30.539134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.537 [2024-07-27 02:26:30.634015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.537 [2024-07-27 02:26:30.634089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.537 [2024-07-27 02:26:30.634106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.537 [2024-07-27 02:26:30.634135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.537 [2024-07-27 02:26:30.634145] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.537 [2024-07-27 02:26:30.634169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 [2024-07-27 02:26:30.769889] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 null0 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1cabcf8bd661432a9c6e096595a714dc 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 [2024-07-27 02:26:30.810168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.797 02:26:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.057 nvme0n1 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.057 [ 00:27:03.057 { 00:27:03.057 "name": "nvme0n1", 00:27:03.057 "aliases": [ 00:27:03.057 "1cabcf8b-d661-432a-9c6e-096595a714dc" 00:27:03.057 ], 00:27:03.057 "product_name": "NVMe disk", 00:27:03.057 "block_size": 512, 00:27:03.057 "num_blocks": 2097152, 00:27:03.057 "uuid": "1cabcf8b-d661-432a-9c6e-096595a714dc", 00:27:03.057 "assigned_rate_limits": { 00:27:03.057 "rw_ios_per_sec": 0, 00:27:03.057 "rw_mbytes_per_sec": 0, 00:27:03.057 "r_mbytes_per_sec": 0, 00:27:03.057 "w_mbytes_per_sec": 0 00:27:03.057 }, 00:27:03.057 "claimed": false, 00:27:03.057 "zoned": false, 00:27:03.057 "supported_io_types": { 00:27:03.057 "read": true, 00:27:03.057 "write": true, 00:27:03.057 "unmap": false, 00:27:03.057 "flush": true, 00:27:03.057 "reset": true, 00:27:03.057 "nvme_admin": true, 00:27:03.057 "nvme_io": true, 00:27:03.057 "nvme_io_md": false, 00:27:03.057 "write_zeroes": true, 00:27:03.057 "zcopy": false, 00:27:03.057 "get_zone_info": false, 00:27:03.057 "zone_management": false, 00:27:03.057 "zone_append": false, 00:27:03.057 "compare": true, 00:27:03.057 "compare_and_write": true, 00:27:03.057 "abort": true, 00:27:03.057 "seek_hole": false, 00:27:03.057 "seek_data": false, 00:27:03.057 "copy": true, 00:27:03.057 "nvme_iov_md": false 00:27:03.057 }, 00:27:03.057 "memory_domains": [ 00:27:03.057 { 00:27:03.057 "dma_device_id": "system", 00:27:03.057 "dma_device_type": 1 00:27:03.057 } 00:27:03.057 ], 00:27:03.057 "driver_specific": { 00:27:03.057 "nvme": [ 00:27:03.057 { 00:27:03.057 "trid": { 00:27:03.057 
"trtype": "TCP", 00:27:03.057 "adrfam": "IPv4", 00:27:03.057 "traddr": "10.0.0.2", 00:27:03.057 "trsvcid": "4420", 00:27:03.057 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:03.057 }, 00:27:03.057 "ctrlr_data": { 00:27:03.057 "cntlid": 1, 00:27:03.057 "vendor_id": "0x8086", 00:27:03.057 "model_number": "SPDK bdev Controller", 00:27:03.057 "serial_number": "00000000000000000000", 00:27:03.057 "firmware_revision": "24.09", 00:27:03.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:03.057 "oacs": { 00:27:03.057 "security": 0, 00:27:03.057 "format": 0, 00:27:03.057 "firmware": 0, 00:27:03.057 "ns_manage": 0 00:27:03.057 }, 00:27:03.057 "multi_ctrlr": true, 00:27:03.057 "ana_reporting": false 00:27:03.057 }, 00:27:03.057 "vs": { 00:27:03.057 "nvme_version": "1.3" 00:27:03.057 }, 00:27:03.057 "ns_data": { 00:27:03.057 "id": 1, 00:27:03.057 "can_share": true 00:27:03.057 } 00:27:03.057 } 00:27:03.057 ], 00:27:03.057 "mp_policy": "active_passive" 00:27:03.057 } 00:27:03.057 } 00:27:03.057 ] 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.057 [2024-07-27 02:26:31.062707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:03.057 [2024-07-27 02:26:31.062797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198a850 (9): Bad file descriptor 00:27:03.057 [2024-07-27 02:26:31.205234] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.057 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.057 [ 00:27:03.057 { 00:27:03.057 "name": "nvme0n1", 00:27:03.057 "aliases": [ 00:27:03.057 "1cabcf8b-d661-432a-9c6e-096595a714dc" 00:27:03.057 ], 00:27:03.057 "product_name": "NVMe disk", 00:27:03.057 "block_size": 512, 00:27:03.057 "num_blocks": 2097152, 00:27:03.057 "uuid": "1cabcf8b-d661-432a-9c6e-096595a714dc", 00:27:03.057 "assigned_rate_limits": { 00:27:03.057 "rw_ios_per_sec": 0, 00:27:03.057 "rw_mbytes_per_sec": 0, 00:27:03.057 "r_mbytes_per_sec": 0, 00:27:03.057 "w_mbytes_per_sec": 0 00:27:03.057 }, 00:27:03.057 "claimed": false, 00:27:03.057 "zoned": false, 00:27:03.057 "supported_io_types": { 00:27:03.057 "read": true, 00:27:03.057 "write": true, 00:27:03.057 "unmap": false, 00:27:03.057 "flush": true, 00:27:03.057 "reset": true, 00:27:03.057 "nvme_admin": true, 00:27:03.057 "nvme_io": true, 00:27:03.057 "nvme_io_md": false, 00:27:03.318 "write_zeroes": true, 00:27:03.318 "zcopy": false, 00:27:03.318 "get_zone_info": false, 00:27:03.318 "zone_management": false, 00:27:03.318 "zone_append": false, 00:27:03.318 "compare": true, 00:27:03.318 "compare_and_write": true, 00:27:03.318 "abort": true, 00:27:03.318 "seek_hole": false, 00:27:03.318 "seek_data": false, 00:27:03.318 "copy": true, 00:27:03.318 "nvme_iov_md": false 00:27:03.318 }, 00:27:03.318 "memory_domains": [ 00:27:03.318 { 00:27:03.318 "dma_device_id": "system", 00:27:03.318 "dma_device_type": 1 00:27:03.318 } 00:27:03.318 ], 00:27:03.318 "driver_specific": { 00:27:03.318 "nvme": [ 00:27:03.318 { 00:27:03.318 "trid": { 00:27:03.318 "trtype": "TCP", 00:27:03.318 "adrfam": "IPv4", 00:27:03.318 "traddr": "10.0.0.2", 00:27:03.318 "trsvcid": "4420", 00:27:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:03.318 }, 00:27:03.318 "ctrlr_data": { 00:27:03.318 "cntlid": 2, 00:27:03.318 "vendor_id": "0x8086", 00:27:03.318 "model_number": "SPDK bdev Controller", 00:27:03.318 "serial_number": "00000000000000000000", 00:27:03.318 "firmware_revision": "24.09", 00:27:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:03.318 "oacs": { 00:27:03.318 "security": 0, 00:27:03.318 "format": 0, 00:27:03.318 "firmware": 0, 00:27:03.318 "ns_manage": 0 00:27:03.318 }, 00:27:03.318 "multi_ctrlr": true, 00:27:03.318 "ana_reporting": false 00:27:03.318 }, 00:27:03.318 "vs": { 00:27:03.318 "nvme_version": "1.3" 00:27:03.318 }, 00:27:03.318 "ns_data": { 00:27:03.318 "id": 1, 00:27:03.318 "can_share": true 00:27:03.318 } 00:27:03.318 } 00:27:03.318 ], 00:27:03.318 "mp_policy": "active_passive" 00:27:03.318 } 00:27:03.318 } 00:27:03.318 ] 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.318 02:26:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FbbjQ6hzk0 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FbbjQ6hzk0 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.318 [2024-07-27 02:26:31.259425] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:03.318 [2024-07-27 02:26:31.259622] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FbbjQ6hzk0 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.318 [2024-07-27 02:26:31.267426] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FbbjQ6hzk0 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.318 [2024-07-27 02:26:31.275457] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:03.318 [2024-07-27 02:26:31.275525] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:03.318 nvme0n1 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
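The TLS leg above opens a second listener on port 4421 with --secure-channel, disables allow_any_host so only explicitly added hosts may connect, registers host1 with a pre-shared key file (interchange-format NVMeTLSkey-1 material, mode 0600), and reattaches the controller over TLS; the warnings note that this PSK-path interface was already deprecated for removal in v24.09. The bdev dump that follows shows the controller on the secure listener (trsvcid 4421) with cntlid 3, its third association in the test after the initial attach and the reset. A sketch of the sequence with rpc.py, reusing this run's key file name:

    # PSK in NVMe TLS interchange format; this is the test's sample key, not a secret
    key=/tmp/tmp.FbbjQ6hzk0
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"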
00:27:03.318 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.318 [ 00:27:03.318 { 00:27:03.318 "name": "nvme0n1", 00:27:03.318 "aliases": [ 00:27:03.318 "1cabcf8b-d661-432a-9c6e-096595a714dc" 00:27:03.318 ], 00:27:03.318 "product_name": "NVMe disk", 00:27:03.318 "block_size": 512, 00:27:03.318 "num_blocks": 2097152, 00:27:03.318 "uuid": "1cabcf8b-d661-432a-9c6e-096595a714dc", 00:27:03.318 "assigned_rate_limits": { 00:27:03.318 "rw_ios_per_sec": 0, 00:27:03.318 "rw_mbytes_per_sec": 0, 00:27:03.318 "r_mbytes_per_sec": 0, 00:27:03.318 "w_mbytes_per_sec": 0 00:27:03.318 }, 00:27:03.318 "claimed": false, 00:27:03.318 "zoned": false, 00:27:03.318 "supported_io_types": { 00:27:03.318 "read": true, 00:27:03.318 "write": true, 00:27:03.318 "unmap": false, 00:27:03.318 "flush": true, 00:27:03.318 "reset": true, 00:27:03.318 "nvme_admin": true, 00:27:03.318 "nvme_io": true, 00:27:03.318 "nvme_io_md": false, 00:27:03.318 "write_zeroes": true, 00:27:03.318 "zcopy": false, 00:27:03.318 "get_zone_info": false, 00:27:03.318 "zone_management": false, 00:27:03.318 "zone_append": false, 00:27:03.318 "compare": true, 00:27:03.318 "compare_and_write": true, 00:27:03.318 "abort": true, 00:27:03.318 "seek_hole": false, 00:27:03.318 "seek_data": false, 00:27:03.318 "copy": true, 00:27:03.318 "nvme_iov_md": false 00:27:03.318 }, 00:27:03.318 "memory_domains": [ 00:27:03.318 { 00:27:03.318 "dma_device_id": "system", 00:27:03.318 "dma_device_type": 1 00:27:03.318 } 00:27:03.318 ], 00:27:03.318 "driver_specific": { 00:27:03.318 "nvme": [ 00:27:03.318 { 00:27:03.318 "trid": { 00:27:03.318 "trtype": "TCP", 00:27:03.318 "adrfam": "IPv4", 00:27:03.318 "traddr": "10.0.0.2", 00:27:03.318 "trsvcid": "4421", 00:27:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:03.318 }, 00:27:03.318 "ctrlr_data": { 00:27:03.318 "cntlid": 3, 00:27:03.318 "vendor_id": "0x8086", 00:27:03.318 "model_number": "SPDK bdev Controller", 00:27:03.318 "serial_number": "00000000000000000000", 00:27:03.318 "firmware_revision": "24.09", 00:27:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:03.318 "oacs": { 00:27:03.318 "security": 0, 00:27:03.318 "format": 0, 00:27:03.318 "firmware": 0, 00:27:03.318 "ns_manage": 0 00:27:03.318 }, 00:27:03.318 "multi_ctrlr": true, 00:27:03.318 "ana_reporting": false 00:27:03.318 }, 00:27:03.319 "vs": { 00:27:03.319 "nvme_version": "1.3" 00:27:03.319 }, 00:27:03.319 "ns_data": { 00:27:03.319 "id": 1, 00:27:03.319 "can_share": true 00:27:03.319 } 00:27:03.319 } 00:27:03.319 ], 00:27:03.319 "mp_policy": "active_passive" 00:27:03.319 } 00:27:03.319 } 00:27:03.319 ] 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.FbbjQ6hzk0 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:03.319 02:26:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.319 rmmod nvme_tcp 00:27:03.319 rmmod nvme_fabrics 00:27:03.319 rmmod nvme_keyring 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1121002 ']' 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1121002 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1121002 ']' 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1121002 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1121002 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1121002' 00:27:03.319 killing process with pid 1121002 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1121002 00:27:03.319 [2024-07-27 02:26:31.458649] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:03.319 [2024-07-27 02:26:31.458688] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:03.319 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1121002 00:27:03.578 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.578 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.578 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.578 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.578 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.578 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.578 02:26:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.578 02:26:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.114 00:27:06.114 real 0m5.473s 00:27:06.114 user 0m2.042s 00:27:06.114 sys 0m1.854s 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:06.114 ************************************ 00:27:06.114 END TEST nvmf_async_init 00:27:06.114 ************************************ 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.114 ************************************ 00:27:06.114 START TEST dma 00:27:06.114 ************************************ 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:06.114 * Looking for test storage... 00:27:06.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.114 
02:26:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.114 02:26:33 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:06.114 00:27:06.114 real 0m0.064s 00:27:06.114 user 0m0.034s 00:27:06.114 sys 0m0.036s 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:06.114 ************************************ 00:27:06.114 END TEST dma 00:27:06.114 ************************************ 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.114 ************************************ 00:27:06.114 START TEST nvmf_identify 00:27:06.114 ************************************ 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:06.114 * Looking for test storage... 00:27:06.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.114 02:26:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:08.017 02:26:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:08.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.017 02:26:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:08.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:08.017 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:08.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:08.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:08.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:27:08.018 00:27:08.018 --- 10.0.0.2 ping statistics --- 00:27:08.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.018 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:08.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:27:08.018 00:27:08.018 --- 10.0.0.1 ping statistics --- 00:27:08.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.018 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1123121 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1123121 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1123121 ']' 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:08.018 02:26:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.018 [2024-07-27 02:26:35.975471] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:27:08.018 [2024-07-27 02:26:35.975558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.018 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.018 [2024-07-27 02:26:36.013617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:08.018 [2024-07-27 02:26:36.041493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.018 [2024-07-27 02:26:36.132023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.018 [2024-07-27 02:26:36.132105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.018 [2024-07-27 02:26:36.132129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.018 [2024-07-27 02:26:36.132140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.018 [2024-07-27 02:26:36.132149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.018 [2024-07-27 02:26:36.132199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.018 [2024-07-27 02:26:36.132260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.018 [2024-07-27 02:26:36.132326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.018 [2024-07-27 02:26:36.132328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 [2024-07-27 02:26:36.261492] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 Malloc0 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 [2024-07-27 02:26:36.337193] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.281 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:08.282 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.282 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.282 [ 00:27:08.282 { 00:27:08.282 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:08.282 "subtype": "Discovery", 00:27:08.282 "listen_addresses": [ 00:27:08.282 { 00:27:08.282 "trtype": "TCP", 00:27:08.282 "adrfam": "IPv4", 00:27:08.282 "traddr": "10.0.0.2", 00:27:08.282 "trsvcid": "4420" 00:27:08.282 } 00:27:08.282 ], 00:27:08.282 "allow_any_host": true, 00:27:08.282 "hosts": [] 00:27:08.282 }, 00:27:08.282 { 00:27:08.282 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.282 "subtype": "NVMe", 00:27:08.282 "listen_addresses": [ 00:27:08.282 { 00:27:08.282 "trtype": "TCP", 00:27:08.282 "adrfam": "IPv4", 00:27:08.282 "traddr": "10.0.0.2", 00:27:08.282 "trsvcid": "4420" 00:27:08.282 } 00:27:08.282 ], 00:27:08.282 "allow_any_host": true, 00:27:08.282 "hosts": [], 00:27:08.282 "serial_number": "SPDK00000000000001", 00:27:08.282 "model_number": "SPDK bdev Controller", 00:27:08.282 "max_namespaces": 32, 00:27:08.282 "min_cntlid": 1, 00:27:08.282 "max_cntlid": 65519, 00:27:08.282 "namespaces": [ 00:27:08.282 { 00:27:08.282 "nsid": 1, 00:27:08.282 "bdev_name": "Malloc0", 00:27:08.282 "name": "Malloc0", 00:27:08.282 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:08.282 "eui64": "ABCDEF0123456789", 00:27:08.282 "uuid": "bdd001cc-2f4e-4be5-b692-fc16675eb042" 00:27:08.282 } 00:27:08.282 ] 00:27:08.282 } 00:27:08.282 ] 00:27:08.282 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.282 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:08.282 [2024-07-27 02:26:36.377691] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 
00:27:08.282 [2024-07-27 02:26:36.377737] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123264 ] 00:27:08.282 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.282 [2024-07-27 02:26:36.394843] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:08.282 [2024-07-27 02:26:36.412761] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:08.282 [2024-07-27 02:26:36.412826] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:08.282 [2024-07-27 02:26:36.412836] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:08.282 [2024-07-27 02:26:36.412850] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:08.282 [2024-07-27 02:26:36.412864] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:08.282 [2024-07-27 02:26:36.416102] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:08.282 [2024-07-27 02:26:36.416159] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2001630 0 00:27:08.282 [2024-07-27 02:26:36.423082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:08.282 [2024-07-27 02:26:36.423109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:08.282 [2024-07-27 02:26:36.423119] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:08.282 [2024-07-27 02:26:36.423126] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:08.282 [2024-07-27 02:26:36.423180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.423193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.423201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.282 [2024-07-27 02:26:36.423219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:08.282 [2024-07-27 02:26:36.423247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.282 [2024-07-27 02:26:36.431085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.282 [2024-07-27 02:26:36.431104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.282 [2024-07-27 02:26:36.431112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.282 [2024-07-27 02:26:36.431136] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:08.282 [2024-07-27 02:26:36.431147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:08.282 [2024-07-27 02:26:36.431157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs 
(no timeout) 00:27:08.282 [2024-07-27 02:26:36.431180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.282 [2024-07-27 02:26:36.431207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.282 [2024-07-27 02:26:36.431231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.282 [2024-07-27 02:26:36.431399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.282 [2024-07-27 02:26:36.431412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.282 [2024-07-27 02:26:36.431419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.282 [2024-07-27 02:26:36.431439] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:08.282 [2024-07-27 02:26:36.431453] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:08.282 [2024-07-27 02:26:36.431466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.282 [2024-07-27 02:26:36.431496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.282 [2024-07-27 02:26:36.431517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.282 [2024-07-27 02:26:36.431697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.282 [2024-07-27 02:26:36.431713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.282 [2024-07-27 02:26:36.431720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.282 [2024-07-27 02:26:36.431736] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:08.282 [2024-07-27 02:26:36.431751] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:08.282 [2024-07-27 02:26:36.431764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.282 [2024-07-27 02:26:36.431789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.282 [2024-07-27 02:26:36.431810] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.282 [2024-07-27 02:26:36.431955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.282 [2024-07-27 02:26:36.431970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.282 [2024-07-27 02:26:36.431977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.431984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.282 [2024-07-27 02:26:36.431993] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:08.282 [2024-07-27 02:26:36.432010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.432019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.432026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.282 [2024-07-27 02:26:36.432037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.282 [2024-07-27 02:26:36.432064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.282 [2024-07-27 02:26:36.432203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.282 [2024-07-27 02:26:36.432218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.282 [2024-07-27 02:26:36.432225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.282 [2024-07-27 02:26:36.432232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.282 [2024-07-27 02:26:36.432241] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:08.282 [2024-07-27 02:26:36.432249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:08.282 [2024-07-27 02:26:36.432262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:08.283 [2024-07-27 02:26:36.432373] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:08.283 [2024-07-27 02:26:36.432381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:08.283 [2024-07-27 02:26:36.432400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.432408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.432415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.283 [2024-07-27 02:26:36.432426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.283 [2024-07-27 02:26:36.432447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.283 [2024-07-27 02:26:36.432621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:08.283 [2024-07-27 02:26:36.432633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.283 [2024-07-27 02:26:36.432640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.432647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.283 [2024-07-27 02:26:36.432656] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:08.283 [2024-07-27 02:26:36.432672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.432681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.432688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.283 [2024-07-27 02:26:36.432698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.283 [2024-07-27 02:26:36.432719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.283 [2024-07-27 02:26:36.432857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.283 [2024-07-27 02:26:36.432869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.283 [2024-07-27 02:26:36.432875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.432882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.283 [2024-07-27 02:26:36.432890] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:08.283 [2024-07-27 02:26:36.432899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:08.283 [2024-07-27 02:26:36.432912] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:08.283 [2024-07-27 02:26:36.432927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:08.283 [2024-07-27 02:26:36.432942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.432950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.283 [2024-07-27 02:26:36.432961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.283 [2024-07-27 02:26:36.432981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.283 [2024-07-27 02:26:36.433223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.283 [2024-07-27 02:26:36.433239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.283 [2024-07-27 02:26:36.433246] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.433253] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2001630): datao=0, datal=4096, cccid=0 00:27:08.283 [2024-07-27 02:26:36.433261] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x204ff80) on tqpair(0x2001630): expected_datao=0, payload_size=4096 00:27:08.283 [2024-07-27 02:26:36.433274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.433292] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.283 [2024-07-27 02:26:36.433302] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.562 [2024-07-27 02:26:36.476074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.562 [2024-07-27 02:26:36.476093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.562 [2024-07-27 02:26:36.476100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.562 [2024-07-27 02:26:36.476107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.562 [2024-07-27 02:26:36.476119] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:08.562 [2024-07-27 02:26:36.476128] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:08.562 [2024-07-27 02:26:36.476135] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:08.562 [2024-07-27 02:26:36.476144] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:08.562 [2024-07-27 02:26:36.476152] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:08.562 [2024-07-27 02:26:36.476160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:08.562 [2024-07-27 02:26:36.476175] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:08.562 [2024-07-27 02:26:36.476207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.562 [2024-07-27 02:26:36.476216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.562 [2024-07-27 02:26:36.476223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.476235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:08.563 [2024-07-27 02:26:36.476259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.563 [2024-07-27 02:26:36.476440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.563 [2024-07-27 02:26:36.476453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.563 [2024-07-27 02:26:36.476460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630 00:27:08.563 [2024-07-27 02:26:36.476479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.476503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.563 [2024-07-27 02:26:36.476514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.476536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.563 [2024-07-27 02:26:36.476547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.476574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.563 [2024-07-27 02:26:36.476585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.476607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.563 [2024-07-27 02:26:36.476616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:08.563 [2024-07-27 02:26:36.476635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:08.563 [2024-07-27 02:26:36.476648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.476666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.563 [2024-07-27 02:26:36.476689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204ff80, cid 0, qid 0 00:27:08.563 [2024-07-27 02:26:36.476701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050100, cid 1, qid 0 00:27:08.563 [2024-07-27 02:26:36.476709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050280, cid 2, qid 0 00:27:08.563 [2024-07-27 02:26:36.476717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.563 [2024-07-27 02:26:36.476724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050580, cid 4, qid 0 00:27:08.563 [2024-07-27 02:26:36.476929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.563 [2024-07-27 02:26:36.476944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.563 [2024-07-27 02:26:36.476951] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.476958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050580) on tqpair=0x2001630 00:27:08.563 [2024-07-27 02:26:36.476967] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:08.563 [2024-07-27 02:26:36.476976] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:08.563 [2024-07-27 02:26:36.476994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.477014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.563 [2024-07-27 02:26:36.477035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050580, cid 4, qid 0 00:27:08.563 [2024-07-27 02:26:36.477213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.563 [2024-07-27 02:26:36.477227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.563 [2024-07-27 02:26:36.477234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477241] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2001630): datao=0, datal=4096, cccid=4 00:27:08.563 [2024-07-27 02:26:36.477249] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2050580) on tqpair(0x2001630): expected_datao=0, payload_size=4096 00:27:08.563 [2024-07-27 02:26:36.477257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477287] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477296] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.563 [2024-07-27 02:26:36.477409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.563 [2024-07-27 02:26:36.477416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050580) on tqpair=0x2001630 00:27:08.563 [2024-07-27 02:26:36.477441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:08.563 [2024-07-27 02:26:36.477479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.477501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.563 [2024-07-27 02:26:36.477512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 
02:26:36.477536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.563 [2024-07-27 02:26:36.477563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050580, cid 4, qid 0 00:27:08.563 [2024-07-27 02:26:36.477575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050700, cid 5, qid 0 00:27:08.563 [2024-07-27 02:26:36.477783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.563 [2024-07-27 02:26:36.477795] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.563 [2024-07-27 02:26:36.477802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477809] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2001630): datao=0, datal=1024, cccid=4 00:27:08.563 [2024-07-27 02:26:36.477817] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2050580) on tqpair(0x2001630): expected_datao=0, payload_size=1024 00:27:08.563 [2024-07-27 02:26:36.477825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477835] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477842] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.563 [2024-07-27 02:26:36.477861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.563 [2024-07-27 02:26:36.477868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.477875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050700) on tqpair=0x2001630 00:27:08.563 [2024-07-27 02:26:36.519214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.563 [2024-07-27 02:26:36.519233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.563 [2024-07-27 02:26:36.519242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.519249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050580) on tqpair=0x2001630 00:27:08.563 [2024-07-27 02:26:36.519267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.519276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2001630) 00:27:08.563 [2024-07-27 02:26:36.519288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.563 [2024-07-27 02:26:36.519318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050580, cid 4, qid 0 00:27:08.563 [2024-07-27 02:26:36.519482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.563 [2024-07-27 02:26:36.519498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.563 [2024-07-27 02:26:36.519512] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.563 [2024-07-27 02:26:36.519520] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2001630): datao=0, datal=3072, cccid=4 00:27:08.563 [2024-07-27 02:26:36.519528] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2050580) on tqpair(0x2001630): expected_datao=0, payload_size=3072 00:27:08.563 
[2024-07-27 02:26:36.519536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:08.563 [2024-07-27 02:26:36.519557] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:08.563 [2024-07-27 02:26:36.519567] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:08.563 [2024-07-27 02:26:36.561197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:08.563 [2024-07-27 02:26:36.561216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:08.563 [2024-07-27 02:26:36.561224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:08.564 [2024-07-27 02:26:36.561232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050580) on tqpair=0x2001630
00:27:08.564 [2024-07-27 02:26:36.561248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:08.564 [2024-07-27 02:26:36.561257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2001630)
00:27:08.564 [2024-07-27 02:26:36.561268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.564 [2024-07-27 02:26:36.561298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050580, cid 4, qid 0
00:27:08.564 [2024-07-27 02:26:36.561456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:08.564 [2024-07-27 02:26:36.561469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:08.564 [2024-07-27 02:26:36.561476] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:08.564 [2024-07-27 02:26:36.561483] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2001630): datao=0, datal=8, cccid=4
00:27:08.564 [2024-07-27 02:26:36.561491] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2050580) on tqpair(0x2001630): expected_datao=0, payload_size=8
00:27:08.564 [2024-07-27 02:26:36.561499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:08.564 [2024-07-27 02:26:36.561509] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:08.564 [2024-07-27 02:26:36.561517] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:08.564 [2024-07-27 02:26:36.607079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:08.564 [2024-07-27 02:26:36.607098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:08.564 [2024-07-27 02:26:36.607106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:08.564 [2024-07-27 02:26:36.607114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050580) on tqpair=0x2001630
00:27:08.564 =====================================================
00:27:08.564 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:08.564 =====================================================
00:27:08.564 Controller Capabilities/Features
00:27:08.564 ================================
00:27:08.564 Vendor ID: 0000
00:27:08.564 Subsystem Vendor ID: 0000
00:27:08.564 Serial Number: ....................
00:27:08.564 Model Number: ........................................
00:27:08.564 Firmware Version: 24.09
00:27:08.564 Recommended Arb Burst: 0
00:27:08.564 IEEE OUI Identifier: 00 00 00
00:27:08.564 Multi-path I/O
00:27:08.564 May have multiple subsystem ports: No
00:27:08.564 May have multiple controllers: No
00:27:08.564 Associated with SR-IOV VF: No
00:27:08.564 Max Data Transfer Size: 131072
00:27:08.564 Max Number of Namespaces: 0
00:27:08.564 Max Number of I/O Queues: 1024
00:27:08.564 NVMe Specification Version (VS): 1.3
00:27:08.564 NVMe Specification Version (Identify): 1.3
00:27:08.564 Maximum Queue Entries: 128
00:27:08.564 Contiguous Queues Required: Yes
00:27:08.564 Arbitration Mechanisms Supported
00:27:08.564 Weighted Round Robin: Not Supported
00:27:08.564 Vendor Specific: Not Supported
00:27:08.564 Reset Timeout: 15000 ms
00:27:08.564 Doorbell Stride: 4 bytes
00:27:08.564 NVM Subsystem Reset: Not Supported
00:27:08.564 Command Sets Supported
00:27:08.564 NVM Command Set: Supported
00:27:08.564 Boot Partition: Not Supported
00:27:08.564 Memory Page Size Minimum: 4096 bytes
00:27:08.564 Memory Page Size Maximum: 4096 bytes
00:27:08.564 Persistent Memory Region: Not Supported
00:27:08.564 Optional Asynchronous Events Supported
00:27:08.564 Namespace Attribute Notices: Not Supported
00:27:08.564 Firmware Activation Notices: Not Supported
00:27:08.564 ANA Change Notices: Not Supported
00:27:08.564 PLE Aggregate Log Change Notices: Not Supported
00:27:08.564 LBA Status Info Alert Notices: Not Supported
00:27:08.564 EGE Aggregate Log Change Notices: Not Supported
00:27:08.564 Normal NVM Subsystem Shutdown event: Not Supported
00:27:08.564 Zone Descriptor Change Notices: Not Supported
00:27:08.564 Discovery Log Change Notices: Supported
00:27:08.564 Controller Attributes
00:27:08.564 128-bit Host Identifier: Not Supported
00:27:08.564 Non-Operational Permissive Mode: Not Supported
00:27:08.564 NVM Sets: Not Supported
00:27:08.564 Read Recovery Levels: Not Supported
00:27:08.564 Endurance Groups: Not Supported
00:27:08.564 Predictable Latency Mode: Not Supported
00:27:08.564 Traffic Based Keep ALive: Not Supported
00:27:08.564 Namespace Granularity: Not Supported
00:27:08.564 SQ Associations: Not Supported
00:27:08.564 UUID List: Not Supported
00:27:08.564 Multi-Domain Subsystem: Not Supported
00:27:08.564 Fixed Capacity Management: Not Supported
00:27:08.564 Variable Capacity Management: Not Supported
00:27:08.564 Delete Endurance Group: Not Supported
00:27:08.564 Delete NVM Set: Not Supported
00:27:08.564 Extended LBA Formats Supported: Not Supported
00:27:08.564 Flexible Data Placement Supported: Not Supported
00:27:08.564
00:27:08.564 Controller Memory Buffer Support
00:27:08.564 ================================
00:27:08.564 Supported: No
00:27:08.564
00:27:08.564 Persistent Memory Region Support
00:27:08.564 ================================
00:27:08.564 Supported: No
00:27:08.564
00:27:08.564 Admin Command Set Attributes
00:27:08.564 ============================
00:27:08.564 Security Send/Receive: Not Supported
00:27:08.564 Format NVM: Not Supported
00:27:08.564 Firmware Activate/Download: Not Supported
00:27:08.564 Namespace Management: Not Supported
00:27:08.564 Device Self-Test: Not Supported
00:27:08.564 Directives: Not Supported
00:27:08.564 NVMe-MI: Not Supported
00:27:08.564 Virtualization Management: Not Supported
00:27:08.564 Doorbell Buffer Config: Not Supported
00:27:08.564 Get LBA Status Capability: Not Supported
00:27:08.564 Command & Feature Lockdown Capability: Not Supported
00:27:08.564 Abort Command Limit: 1
00:27:08.564 Async Event Request Limit: 4
00:27:08.564 Number of Firmware Slots: N/A
00:27:08.564 Firmware Slot 1 Read-Only: N/A
00:27:08.564 Firmware Activation Without Reset: N/A
00:27:08.564 Multiple Update Detection Support: N/A
00:27:08.564 Firmware Update Granularity: No Information Provided
00:27:08.564 Per-Namespace SMART Log: No
00:27:08.564 Asymmetric Namespace Access Log Page: Not Supported
00:27:08.564 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:08.564 Command Effects Log Page: Not Supported
00:27:08.564 Get Log Page Extended Data: Supported
00:27:08.564 Telemetry Log Pages: Not Supported
00:27:08.564 Persistent Event Log Pages: Not Supported
00:27:08.564 Supported Log Pages Log Page: May Support
00:27:08.564 Commands Supported & Effects Log Page: Not Supported
00:27:08.564 Feature Identifiers & Effects Log Page:May Support
00:27:08.564 NVMe-MI Commands & Effects Log Page: May Support
00:27:08.564 Data Area 4 for Telemetry Log: Not Supported
00:27:08.564 Error Log Page Entries Supported: 128
00:27:08.564 Keep Alive: Not Supported
00:27:08.564
00:27:08.564 NVM Command Set Attributes
00:27:08.564 ==========================
00:27:08.564 Submission Queue Entry Size
00:27:08.564 Max: 1
00:27:08.564 Min: 1
00:27:08.564 Completion Queue Entry Size
00:27:08.564 Max: 1
00:27:08.564 Min: 1
00:27:08.564 Number of Namespaces: 0
00:27:08.564 Compare Command: Not Supported
00:27:08.564 Write Uncorrectable Command: Not Supported
00:27:08.564 Dataset Management Command: Not Supported
00:27:08.564 Write Zeroes Command: Not Supported
00:27:08.564 Set Features Save Field: Not Supported
00:27:08.564 Reservations: Not Supported
00:27:08.564 Timestamp: Not Supported
00:27:08.564 Copy: Not Supported
00:27:08.564 Volatile Write Cache: Not Present
00:27:08.564 Atomic Write Unit (Normal): 1
00:27:08.564 Atomic Write Unit (PFail): 1
00:27:08.564 Atomic Compare & Write Unit: 1
00:27:08.564 Fused Compare & Write: Supported
00:27:08.564 Scatter-Gather List
00:27:08.564 SGL Command Set: Supported
00:27:08.564 SGL Keyed: Supported
00:27:08.564 SGL Bit Bucket Descriptor: Not Supported
00:27:08.564 SGL Metadata Pointer: Not Supported
00:27:08.564 Oversized SGL: Not Supported
00:27:08.564 SGL Metadata Address: Not Supported
00:27:08.564 SGL Offset: Supported
00:27:08.564 Transport SGL Data Block: Not Supported
00:27:08.564 Replay Protected Memory Block: Not Supported
00:27:08.564
00:27:08.564 Firmware Slot Information
00:27:08.564 =========================
00:27:08.564 Active slot: 0
00:27:08.564
00:27:08.564
00:27:08.564 Error Log
00:27:08.564 =========
00:27:08.564
00:27:08.564 Active Namespaces
00:27:08.564 =================
00:27:08.564 Discovery Log Page
00:27:08.565 ==================
00:27:08.565 Generation Counter: 2
00:27:08.565 Number of Records: 2
00:27:08.565 Record Format: 0
00:27:08.565
00:27:08.565 Discovery Log Entry 0
00:27:08.565 ----------------------
00:27:08.565 Transport Type: 3 (TCP)
00:27:08.565 Address Family: 1 (IPv4)
00:27:08.565 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:08.565 Entry Flags:
00:27:08.565 Duplicate Returned Information: 1
00:27:08.565 Explicit Persistent Connection Support for Discovery: 1
00:27:08.565 Transport Requirements:
00:27:08.565 Secure Channel: Not Required
00:27:08.565 Port ID: 0 (0x0000)
00:27:08.565 Controller ID: 65535 (0xffff)
00:27:08.565 Admin Max SQ Size: 128
00:27:08.565 Transport Service Identifier: 4420
00:27:08.565 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:08.565 Transport Address: 10.0.0.2
00:27:08.565 Discovery Log Entry 1
00:27:08.565 ----------------------
00:27:08.565 Transport Type: 3 (TCP)
00:27:08.565 Address Family: 1 (IPv4)
00:27:08.565 Subsystem Type: 2 (NVM Subsystem)
00:27:08.565 Entry Flags:
00:27:08.565 Duplicate Returned Information: 0
00:27:08.565 Explicit Persistent Connection Support for Discovery: 0
00:27:08.565 Transport Requirements:
00:27:08.565 Secure Channel: Not Required
00:27:08.565 Port ID: 0 (0x0000)
00:27:08.565 Controller ID: 65535 (0xffff)
00:27:08.565 Admin Max SQ Size: 128
00:27:08.565 Transport Service Identifier: 4420
00:27:08.565 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:08.565 Transport Address: 10.0.0.2
[2024-07-27 02:26:36.607223] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:27:08.565 [2024-07-27 02:26:36.607245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x204ff80) on tqpair=0x2001630
00:27:08.565 [2024-07-27 02:26:36.607258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.565 [2024-07-27 02:26:36.607267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050100) on tqpair=0x2001630
00:27:08.565 [2024-07-27 02:26:36.607275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.565 [2024-07-27 02:26:36.607284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050280) on tqpair=0x2001630
00:27:08.565 [2024-07-27 02:26:36.607292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.565 [2024-07-27 02:26:36.607300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630
00:27:08.565 [2024-07-27 02:26:36.607308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.565 [2024-07-27 02:26:36.607330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:08.565 [2024-07-27 02:26:36.607339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:08.565 [2024-07-27 02:26:36.607346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630)
00:27:08.565 [2024-07-27 02:26:36.607358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.565 [2024-07-27 02:26:36.607383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0
00:27:08.565 [2024-07-27 02:26:36.607547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:08.565 [2024-07-27 02:26:36.607563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:08.565 [2024-07-27 02:26:36.607570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:08.565 [2024-07-27 02:26:36.607577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630
00:27:08.565 [2024-07-27 02:26:36.607589] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:08.565 [2024-07-27 02:26:36.607597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:08.565 [2024-07-27 02:26:36.607604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630)
00:27:08.565 [2024-07-27
02:26:36.607614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.565 [2024-07-27 02:26:36.607641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.565 [2024-07-27 02:26:36.607801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.565 [2024-07-27 02:26:36.607816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.565 [2024-07-27 02:26:36.607823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.607830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.565 [2024-07-27 02:26:36.607839] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:08.565 [2024-07-27 02:26:36.607848] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:08.565 [2024-07-27 02:26:36.607864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.607874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.607880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.565 [2024-07-27 02:26:36.607891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.565 [2024-07-27 02:26:36.607912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.565 [2024-07-27 02:26:36.608091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.565 [2024-07-27 02:26:36.608105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.565 [2024-07-27 02:26:36.608113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.565 [2024-07-27 02:26:36.608137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.565 [2024-07-27 02:26:36.608163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.565 [2024-07-27 02:26:36.608184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.565 [2024-07-27 02:26:36.608328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.565 [2024-07-27 02:26:36.608343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.565 [2024-07-27 02:26:36.608355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.565 [2024-07-27 02:26:36.608379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608396] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.565 [2024-07-27 02:26:36.608406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.565 [2024-07-27 02:26:36.608427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.565 [2024-07-27 02:26:36.608566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.565 [2024-07-27 02:26:36.608581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.565 [2024-07-27 02:26:36.608588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.565 [2024-07-27 02:26:36.608612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.565 [2024-07-27 02:26:36.608639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.565 [2024-07-27 02:26:36.608660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.565 [2024-07-27 02:26:36.608795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.565 [2024-07-27 02:26:36.608811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.565 [2024-07-27 02:26:36.608818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.565 [2024-07-27 02:26:36.608841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.608858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.565 [2024-07-27 02:26:36.608868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.565 [2024-07-27 02:26:36.608889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.565 [2024-07-27 02:26:36.609020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.565 [2024-07-27 02:26:36.609032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.565 [2024-07-27 02:26:36.609040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.565 [2024-07-27 02:26:36.609047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.565 [2024-07-27 02:26:36.609069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.609097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.609118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.609256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.609271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.609278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.609306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.609334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.609355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.609491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.609507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.609514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.609537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.609564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.609585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.609725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.609741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.609748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.609771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.609798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.609819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 
[2024-07-27 02:26:36.609949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.609962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.609969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.609976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.609992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.610018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.610039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.610181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.610196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.610203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.610231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.610259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.610280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.610411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.610423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.610430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.610454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.610480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.610500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.610657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.610672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:08.566 [2024-07-27 02:26:36.610679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.610703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.610730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.610751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.610904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.610916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.610924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.610947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.610963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.610974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.610994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.615073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.615103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.615110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.615118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2050400) on tqpair=0x2001630 00:27:08.566 [2024-07-27 02:26:36.615136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.615150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.615158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2001630) 00:27:08.566 [2024-07-27 02:26:36.615169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.566 [2024-07-27 02:26:36.615191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2050400, cid 3, qid 0 00:27:08.566 [2024-07-27 02:26:36.615346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.566 [2024-07-27 02:26:36.615358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.566 [2024-07-27 02:26:36.615366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.566 [2024-07-27 02:26:36.615373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2050400) on tqpair=0x2001630
00:27:08.566 [2024-07-27 02:26:36.615386] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:27:08.566
00:27:08.566 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:27:08.566 [2024-07-27 02:26:36.651537] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:27:08.566 [2024-07-27 02:26:36.651585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123268 ]
00:27:08.566 EAL: No free 2048 kB hugepages reported on node 1
00:27:08.567 [2024-07-27 02:26:36.669715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:27:08.567 [2024-07-27 02:26:36.687393] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:27:08.567 [2024-07-27 02:26:36.687442] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:27:08.567 [2024-07-27 02:26:36.687451] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:27:08.567 [2024-07-27 02:26:36.687466] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:27:08.567 [2024-07-27 02:26:36.687480] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:27:08.567 [2024-07-27 02:26:36.687765] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:27:08.567 [2024-07-27 02:26:36.687805] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x80e630 0
00:27:08.567 [2024-07-27 02:26:36.698073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:27:08.567 [2024-07-27 02:26:36.698095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:27:08.567 [2024-07-27 02:26:36.698106] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:27:08.567 [2024-07-27 02:26:36.698112] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:27:08.567 [2024-07-27 02:26:36.698154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:08.567 [2024-07-27 02:26:36.698166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:08.567 [2024-07-27 02:26:36.698173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630)
00:27:08.567 [2024-07-27 02:26:36.698188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:27:08.567 [2024-07-27 02:26:36.698224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0
00:27:08.567 [2024-07-27 02:26:36.706082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:08.567 [2024-07-27 02:26:36.706111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:08.567 [2024-07-27 02:26:36.706119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:08.567 [2024-07-27 02:26:36.706127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.567 [2024-07-27 02:26:36.706145] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:08.567 [2024-07-27 02:26:36.706156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:08.567 [2024-07-27 02:26:36.706166] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:08.567 [2024-07-27 02:26:36.706184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.567 [2024-07-27 02:26:36.706211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.567 [2024-07-27 02:26:36.706236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.567 [2024-07-27 02:26:36.706395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.567 [2024-07-27 02:26:36.706410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.567 [2024-07-27 02:26:36.706418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.567 [2024-07-27 02:26:36.706437] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:08.567 [2024-07-27 02:26:36.706452] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:08.567 [2024-07-27 02:26:36.706464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.567 [2024-07-27 02:26:36.706490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.567 [2024-07-27 02:26:36.706512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.567 [2024-07-27 02:26:36.706654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.567 [2024-07-27 02:26:36.706668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.567 [2024-07-27 02:26:36.706676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.567 [2024-07-27 02:26:36.706692] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:08.567 [2024-07-27 02:26:36.706706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:08.567 [2024-07-27 02:26:36.706718] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.567 [2024-07-27 02:26:36.706743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.567 [2024-07-27 02:26:36.706765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.567 [2024-07-27 02:26:36.706904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.567 [2024-07-27 02:26:36.706920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.567 [2024-07-27 02:26:36.706928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.567 [2024-07-27 02:26:36.706944] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:08.567 [2024-07-27 02:26:36.706960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.706976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.567 [2024-07-27 02:26:36.706987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.567 [2024-07-27 02:26:36.707009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.567 [2024-07-27 02:26:36.707179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.567 [2024-07-27 02:26:36.707196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.567 [2024-07-27 02:26:36.707203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.707210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.567 [2024-07-27 02:26:36.707219] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:08.567 [2024-07-27 02:26:36.707227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:08.567 [2024-07-27 02:26:36.707241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:08.567 [2024-07-27 02:26:36.707351] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:08.567 [2024-07-27 02:26:36.707358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:08.567 [2024-07-27 02:26:36.707371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.707379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.567 [2024-07-27 02:26:36.707386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.567 [2024-07-27 02:26:36.707397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.568 [2024-07-27 02:26:36.707420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.568 [2024-07-27 02:26:36.707557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.568 [2024-07-27 02:26:36.707569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.568 [2024-07-27 02:26:36.707576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.707583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.568 [2024-07-27 02:26:36.707592] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:08.568 [2024-07-27 02:26:36.707608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.707617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.707624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.707634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.568 [2024-07-27 02:26:36.707656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.568 [2024-07-27 02:26:36.707798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.568 [2024-07-27 02:26:36.707816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.568 [2024-07-27 02:26:36.707824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.707831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.568 [2024-07-27 02:26:36.707839] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:08.568 [2024-07-27 02:26:36.707848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.707861] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:08.568 [2024-07-27 02:26:36.707875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.707889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.707897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.707908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.568 [2024-07-27 02:26:36.707930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.568 [2024-07-27 02:26:36.708113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.568 [2024-07-27 02:26:36.708127] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.568 [2024-07-27 02:26:36.708134] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708141] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=4096, cccid=0 00:27:08.568 [2024-07-27 02:26:36.708149] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85cf80) on tqpair(0x80e630): expected_datao=0, payload_size=4096 00:27:08.568 [2024-07-27 02:26:36.708157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708189] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708198] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.568 [2024-07-27 02:26:36.708313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.568 [2024-07-27 02:26:36.708320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.568 [2024-07-27 02:26:36.708338] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:08.568 [2024-07-27 02:26:36.708346] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:08.568 [2024-07-27 02:26:36.708354] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:08.568 [2024-07-27 02:26:36.708361] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:08.568 [2024-07-27 02:26:36.708369] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:08.568 [2024-07-27 02:26:36.708378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.708392] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.708409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.708439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:08.568 [2024-07-27 02:26:36.708462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.568 [2024-07-27 02:26:36.708620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.568 [2024-07-27 02:26:36.708634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.568 [2024-07-27 02:26:36.708641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.568 [2024-07-27 02:26:36.708660] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.708684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.568 [2024-07-27 02:26:36.708695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.708718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.568 [2024-07-27 02:26:36.708728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.708751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.568 [2024-07-27 02:26:36.708761] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.708784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.568 [2024-07-27 02:26:36.708793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.708812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.708825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.708832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.708843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.568 [2024-07-27 02:26:36.708866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85cf80, cid 0, qid 0 00:27:08.568 [2024-07-27 02:26:36.708877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d100, cid 1, qid 0 00:27:08.568 [2024-07-27 02:26:36.708886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d280, cid 2, qid 0 00:27:08.568 [2024-07-27 02:26:36.708893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d400, cid 3, qid 0 00:27:08.568 [2024-07-27 02:26:36.708901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d580, cid 4, qid 0 00:27:08.568 [2024-07-27 
02:26:36.709071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.568 [2024-07-27 02:26:36.709085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.568 [2024-07-27 02:26:36.709096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.709103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d580) on tqpair=0x80e630 00:27:08.568 [2024-07-27 02:26:36.709112] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:08.568 [2024-07-27 02:26:36.709121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.709139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.709151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:08.568 [2024-07-27 02:26:36.709162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.709170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.568 [2024-07-27 02:26:36.709177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x80e630) 00:27:08.568 [2024-07-27 02:26:36.709188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:08.568 [2024-07-27 02:26:36.709209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d580, cid 4, qid 0 00:27:08.568 [2024-07-27 02:26:36.709365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.568 [2024-07-27 02:26:36.709377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.569 [2024-07-27 02:26:36.709384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.569 [2024-07-27 02:26:36.709391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d580) on tqpair=0x80e630 00:27:08.569 [2024-07-27 02:26:36.709459] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:08.569 [2024-07-27 02:26:36.709478] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:08.569 [2024-07-27 02:26:36.709492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.569 [2024-07-27 02:26:36.709500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x80e630) 00:27:08.569 [2024-07-27 02:26:36.709511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.569 [2024-07-27 02:26:36.709532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d580, cid 4, qid 0 00:27:08.569 [2024-07-27 02:26:36.709686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.569 [2024-07-27 02:26:36.709701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.569 [2024-07-27 02:26:36.709708] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:27:08.569 [2024-07-27 02:26:36.709715] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=4096, cccid=4 00:27:08.569 [2024-07-27 02:26:36.709723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85d580) on tqpair(0x80e630): expected_datao=0, payload_size=4096 00:27:08.569 [2024-07-27 02:26:36.709731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.569 [2024-07-27 02:26:36.709756] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.569 [2024-07-27 02:26:36.709766] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.709865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.831 [2024-07-27 02:26:36.709877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.831 [2024-07-27 02:26:36.709885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.709893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d580) on tqpair=0x80e630 00:27:08.831 [2024-07-27 02:26:36.709912] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:08.831 [2024-07-27 02:26:36.709935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:08.831 [2024-07-27 02:26:36.709953] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:08.831 [2024-07-27 02:26:36.709966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.709974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x80e630) 00:27:08.831 [2024-07-27 02:26:36.709985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.831 [2024-07-27 02:26:36.710007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d580, cid 4, qid 0 00:27:08.831 [2024-07-27 02:26:36.714073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.831 [2024-07-27 02:26:36.714090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.831 [2024-07-27 02:26:36.714098] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.714105] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=4096, cccid=4 00:27:08.831 [2024-07-27 02:26:36.714113] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85d580) on tqpair(0x80e630): expected_datao=0, payload_size=4096 00:27:08.831 [2024-07-27 02:26:36.714121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.714132] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.714140] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.754072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.831 [2024-07-27 02:26:36.754090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.831 [2024-07-27 02:26:36.754098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.754106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x85d580) on tqpair=0x80e630 00:27:08.831 [2024-07-27 02:26:36.754128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:08.831 [2024-07-27 02:26:36.754148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:08.831 [2024-07-27 02:26:36.754162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.754170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x80e630) 00:27:08.831 [2024-07-27 02:26:36.754181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.831 [2024-07-27 02:26:36.754205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d580, cid 4, qid 0 00:27:08.831 [2024-07-27 02:26:36.754356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.831 [2024-07-27 02:26:36.754371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.831 [2024-07-27 02:26:36.754378] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.754384] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=4096, cccid=4 00:27:08.831 [2024-07-27 02:26:36.754392] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85d580) on tqpair(0x80e630): expected_datao=0, payload_size=4096 00:27:08.831 [2024-07-27 02:26:36.754400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.754431] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.831 [2024-07-27 02:26:36.754441] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.754542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.832 [2024-07-27 02:26:36.754560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.832 [2024-07-27 02:26:36.754568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.754576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d580) on tqpair=0x80e630 00:27:08.832 [2024-07-27 02:26:36.754588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:08.832 [2024-07-27 02:26:36.754604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:08.832 [2024-07-27 02:26:36.754619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:08.832 [2024-07-27 02:26:36.754631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:08.832 [2024-07-27 02:26:36.754640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:08.832 [2024-07-27 02:26:36.754649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:08.832 [2024-07-27 
02:26:36.754658] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:08.832 [2024-07-27 02:26:36.754666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:08.832 [2024-07-27 02:26:36.754675] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:08.832 [2024-07-27 02:26:36.754694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.754703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.754714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 02:26:36.754725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.754748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.754754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.754763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.832 [2024-07-27 02:26:36.754789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d580, cid 4, qid 0 00:27:08.832 [2024-07-27 02:26:36.754816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d700, cid 5, qid 0 00:27:08.832 [2024-07-27 02:26:36.754974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.832 [2024-07-27 02:26:36.754989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.832 [2024-07-27 02:26:36.754996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d580) on tqpair=0x80e630 00:27:08.832 [2024-07-27 02:26:36.755014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.832 [2024-07-27 02:26:36.755023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.832 [2024-07-27 02:26:36.755030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d700) on tqpair=0x80e630 00:27:08.832 [2024-07-27 02:26:36.755053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.755082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 02:26:36.755108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d700, cid 5, qid 0 00:27:08.832 [2024-07-27 02:26:36.755248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.832 [2024-07-27 02:26:36.755263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.832 [2024-07-27 02:26:36.755270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755277] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d700) on tqpair=0x80e630 00:27:08.832 [2024-07-27 02:26:36.755294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.755314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 02:26:36.755335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d700, cid 5, qid 0 00:27:08.832 [2024-07-27 02:26:36.755493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.832 [2024-07-27 02:26:36.755505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.832 [2024-07-27 02:26:36.755512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d700) on tqpair=0x80e630 00:27:08.832 [2024-07-27 02:26:36.755535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.755554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 02:26:36.755575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d700, cid 5, qid 0 00:27:08.832 [2024-07-27 02:26:36.755716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.832 [2024-07-27 02:26:36.755731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.832 [2024-07-27 02:26:36.755738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d700) on tqpair=0x80e630 00:27:08.832 [2024-07-27 02:26:36.755770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.755792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 02:26:36.755805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.755822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 02:26:36.755834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.755866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 
02:26:36.755879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.755886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x80e630) 00:27:08.832 [2024-07-27 02:26:36.755895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.832 [2024-07-27 02:26:36.755917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d700, cid 5, qid 0 00:27:08.832 [2024-07-27 02:26:36.755947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d580, cid 4, qid 0 00:27:08.832 [2024-07-27 02:26:36.755956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d880, cid 6, qid 0 00:27:08.832 [2024-07-27 02:26:36.755964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85da00, cid 7, qid 0 00:27:08.832 [2024-07-27 02:26:36.756199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.832 [2024-07-27 02:26:36.756213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.832 [2024-07-27 02:26:36.756220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756227] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=8192, cccid=5 00:27:08.832 [2024-07-27 02:26:36.756235] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85d700) on tqpair(0x80e630): expected_datao=0, payload_size=8192 00:27:08.832 [2024-07-27 02:26:36.756243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756269] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756279] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.832 [2024-07-27 02:26:36.756297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.832 [2024-07-27 02:26:36.756304] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756311] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=512, cccid=4 00:27:08.832 [2024-07-27 02:26:36.756319] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85d580) on tqpair(0x80e630): expected_datao=0, payload_size=512 00:27:08.832 [2024-07-27 02:26:36.756327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756336] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756344] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.832 [2024-07-27 02:26:36.756362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.832 [2024-07-27 02:26:36.756368] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.832 [2024-07-27 02:26:36.756376] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=512, cccid=6 00:27:08.832 [2024-07-27 02:26:36.756383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85d880) on tqpair(0x80e630): expected_datao=0, payload_size=512 00:27:08.832 
[2024-07-27 02:26:36.756391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756401] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756408] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:08.833 [2024-07-27 02:26:36.756426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:08.833 [2024-07-27 02:26:36.756433] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756439] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x80e630): datao=0, datal=4096, cccid=7 00:27:08.833 [2024-07-27 02:26:36.756447] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85da00) on tqpair(0x80e630): expected_datao=0, payload_size=4096 00:27:08.833 [2024-07-27 02:26:36.756455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756464] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756472] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.833 [2024-07-27 02:26:36.756493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.833 [2024-07-27 02:26:36.756500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d700) on tqpair=0x80e630 00:27:08.833 [2024-07-27 02:26:36.756530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.833 [2024-07-27 02:26:36.756540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.833 [2024-07-27 02:26:36.756547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d580) on tqpair=0x80e630 00:27:08.833 [2024-07-27 02:26:36.756570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.833 [2024-07-27 02:26:36.756580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.833 [2024-07-27 02:26:36.756587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d880) on tqpair=0x80e630 00:27:08.833 [2024-07-27 02:26:36.756605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.833 [2024-07-27 02:26:36.756614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.833 [2024-07-27 02:26:36.756621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.833 [2024-07-27 02:26:36.756628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85da00) on tqpair=0x80e630 00:27:08.833 ===================================================== 00:27:08.833 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:08.833 ===================================================== 00:27:08.833 Controller Capabilities/Features 00:27:08.833 ================================ 00:27:08.833 Vendor ID: 8086 00:27:08.833 Subsystem Vendor ID: 8086 00:27:08.833 Serial Number: SPDK00000000000001 00:27:08.833 Model Number: 
SPDK bdev Controller 00:27:08.833 Firmware Version: 24.09 00:27:08.833 Recommended Arb Burst: 6 00:27:08.833 IEEE OUI Identifier: e4 d2 5c 00:27:08.833 Multi-path I/O 00:27:08.833 May have multiple subsystem ports: Yes 00:27:08.833 May have multiple controllers: Yes 00:27:08.833 Associated with SR-IOV VF: No 00:27:08.833 Max Data Transfer Size: 131072 00:27:08.833 Max Number of Namespaces: 32 00:27:08.833 Max Number of I/O Queues: 127 00:27:08.833 NVMe Specification Version (VS): 1.3 00:27:08.833 NVMe Specification Version (Identify): 1.3 00:27:08.833 Maximum Queue Entries: 128 00:27:08.833 Contiguous Queues Required: Yes 00:27:08.833 Arbitration Mechanisms Supported 00:27:08.833 Weighted Round Robin: Not Supported 00:27:08.833 Vendor Specific: Not Supported 00:27:08.833 Reset Timeout: 15000 ms 00:27:08.833 Doorbell Stride: 4 bytes 00:27:08.833 NVM Subsystem Reset: Not Supported 00:27:08.833 Command Sets Supported 00:27:08.833 NVM Command Set: Supported 00:27:08.833 Boot Partition: Not Supported 00:27:08.833 Memory Page Size Minimum: 4096 bytes 00:27:08.833 Memory Page Size Maximum: 4096 bytes 00:27:08.833 Persistent Memory Region: Not Supported 00:27:08.833 Optional Asynchronous Events Supported 00:27:08.833 Namespace Attribute Notices: Supported 00:27:08.833 Firmware Activation Notices: Not Supported 00:27:08.833 ANA Change Notices: Not Supported 00:27:08.833 PLE Aggregate Log Change Notices: Not Supported 00:27:08.833 LBA Status Info Alert Notices: Not Supported 00:27:08.833 EGE Aggregate Log Change Notices: Not Supported 00:27:08.833 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.833 Zone Descriptor Change Notices: Not Supported 00:27:08.833 Discovery Log Change Notices: Not Supported 00:27:08.833 Controller Attributes 00:27:08.833 128-bit Host Identifier: Supported 00:27:08.833 Non-Operational Permissive Mode: Not Supported 00:27:08.833 NVM Sets: Not Supported 00:27:08.833 Read Recovery Levels: Not Supported 00:27:08.833 Endurance Groups: Not Supported 00:27:08.833 Predictable Latency Mode: Not Supported 00:27:08.833 Traffic Based Keep ALive: Not Supported 00:27:08.833 Namespace Granularity: Not Supported 00:27:08.833 SQ Associations: Not Supported 00:27:08.833 UUID List: Not Supported 00:27:08.833 Multi-Domain Subsystem: Not Supported 00:27:08.833 Fixed Capacity Management: Not Supported 00:27:08.833 Variable Capacity Management: Not Supported 00:27:08.833 Delete Endurance Group: Not Supported 00:27:08.833 Delete NVM Set: Not Supported 00:27:08.833 Extended LBA Formats Supported: Not Supported 00:27:08.833 Flexible Data Placement Supported: Not Supported 00:27:08.833 00:27:08.833 Controller Memory Buffer Support 00:27:08.833 ================================ 00:27:08.833 Supported: No 00:27:08.833 00:27:08.833 Persistent Memory Region Support 00:27:08.833 ================================ 00:27:08.833 Supported: No 00:27:08.833 00:27:08.833 Admin Command Set Attributes 00:27:08.833 ============================ 00:27:08.833 Security Send/Receive: Not Supported 00:27:08.833 Format NVM: Not Supported 00:27:08.833 Firmware Activate/Download: Not Supported 00:27:08.833 Namespace Management: Not Supported 00:27:08.833 Device Self-Test: Not Supported 00:27:08.833 Directives: Not Supported 00:27:08.833 NVMe-MI: Not Supported 00:27:08.833 Virtualization Management: Not Supported 00:27:08.833 Doorbell Buffer Config: Not Supported 00:27:08.833 Get LBA Status Capability: Not Supported 00:27:08.833 Command & Feature Lockdown Capability: Not Supported 00:27:08.833 Abort Command Limit: 4 
00:27:08.833 Async Event Request Limit: 4 00:27:08.833 Number of Firmware Slots: N/A 00:27:08.833 Firmware Slot 1 Read-Only: N/A 00:27:08.833 Firmware Activation Without Reset: N/A 00:27:08.833 Multiple Update Detection Support: N/A 00:27:08.833 Firmware Update Granularity: No Information Provided 00:27:08.833 Per-Namespace SMART Log: No 00:27:08.833 Asymmetric Namespace Access Log Page: Not Supported 00:27:08.833 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:08.833 Command Effects Log Page: Supported 00:27:08.833 Get Log Page Extended Data: Supported 00:27:08.833 Telemetry Log Pages: Not Supported 00:27:08.833 Persistent Event Log Pages: Not Supported 00:27:08.833 Supported Log Pages Log Page: May Support 00:27:08.833 Commands Supported & Effects Log Page: Not Supported 00:27:08.833 Feature Identifiers & Effects Log Page:May Support 00:27:08.833 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.833 Data Area 4 for Telemetry Log: Not Supported 00:27:08.833 Error Log Page Entries Supported: 128 00:27:08.833 Keep Alive: Supported 00:27:08.833 Keep Alive Granularity: 10000 ms 00:27:08.833 00:27:08.833 NVM Command Set Attributes 00:27:08.833 ========================== 00:27:08.833 Submission Queue Entry Size 00:27:08.833 Max: 64 00:27:08.833 Min: 64 00:27:08.833 Completion Queue Entry Size 00:27:08.833 Max: 16 00:27:08.833 Min: 16 00:27:08.833 Number of Namespaces: 32 00:27:08.833 Compare Command: Supported 00:27:08.833 Write Uncorrectable Command: Not Supported 00:27:08.833 Dataset Management Command: Supported 00:27:08.833 Write Zeroes Command: Supported 00:27:08.833 Set Features Save Field: Not Supported 00:27:08.833 Reservations: Supported 00:27:08.833 Timestamp: Not Supported 00:27:08.833 Copy: Supported 00:27:08.833 Volatile Write Cache: Present 00:27:08.833 Atomic Write Unit (Normal): 1 00:27:08.833 Atomic Write Unit (PFail): 1 00:27:08.833 Atomic Compare & Write Unit: 1 00:27:08.833 Fused Compare & Write: Supported 00:27:08.833 Scatter-Gather List 00:27:08.833 SGL Command Set: Supported 00:27:08.833 SGL Keyed: Supported 00:27:08.833 SGL Bit Bucket Descriptor: Not Supported 00:27:08.833 SGL Metadata Pointer: Not Supported 00:27:08.834 Oversized SGL: Not Supported 00:27:08.834 SGL Metadata Address: Not Supported 00:27:08.834 SGL Offset: Supported 00:27:08.834 Transport SGL Data Block: Not Supported 00:27:08.834 Replay Protected Memory Block: Not Supported 00:27:08.834 00:27:08.834 Firmware Slot Information 00:27:08.834 ========================= 00:27:08.834 Active slot: 1 00:27:08.834 Slot 1 Firmware Revision: 24.09 00:27:08.834 00:27:08.834 00:27:08.834 Commands Supported and Effects 00:27:08.834 ============================== 00:27:08.834 Admin Commands 00:27:08.834 -------------- 00:27:08.834 Get Log Page (02h): Supported 00:27:08.834 Identify (06h): Supported 00:27:08.834 Abort (08h): Supported 00:27:08.834 Set Features (09h): Supported 00:27:08.834 Get Features (0Ah): Supported 00:27:08.834 Asynchronous Event Request (0Ch): Supported 00:27:08.834 Keep Alive (18h): Supported 00:27:08.834 I/O Commands 00:27:08.834 ------------ 00:27:08.834 Flush (00h): Supported LBA-Change 00:27:08.834 Write (01h): Supported LBA-Change 00:27:08.834 Read (02h): Supported 00:27:08.834 Compare (05h): Supported 00:27:08.834 Write Zeroes (08h): Supported LBA-Change 00:27:08.834 Dataset Management (09h): Supported LBA-Change 00:27:08.834 Copy (19h): Supported LBA-Change 00:27:08.834 00:27:08.834 Error Log 00:27:08.834 ========= 00:27:08.834 00:27:08.834 Arbitration 00:27:08.834 =========== 
00:27:08.834 Arbitration Burst: 1 00:27:08.834 00:27:08.834 Power Management 00:27:08.834 ================ 00:27:08.834 Number of Power States: 1 00:27:08.834 Current Power State: Power State #0 00:27:08.834 Power State #0: 00:27:08.834 Max Power: 0.00 W 00:27:08.834 Non-Operational State: Operational 00:27:08.834 Entry Latency: Not Reported 00:27:08.834 Exit Latency: Not Reported 00:27:08.834 Relative Read Throughput: 0 00:27:08.834 Relative Read Latency: 0 00:27:08.834 Relative Write Throughput: 0 00:27:08.834 Relative Write Latency: 0 00:27:08.834 Idle Power: Not Reported 00:27:08.834 Active Power: Not Reported 00:27:08.834 Non-Operational Permissive Mode: Not Supported 00:27:08.834 00:27:08.834 Health Information 00:27:08.834 ================== 00:27:08.834 Critical Warnings: 00:27:08.834 Available Spare Space: OK 00:27:08.834 Temperature: OK 00:27:08.834 Device Reliability: OK 00:27:08.834 Read Only: No 00:27:08.834 Volatile Memory Backup: OK 00:27:08.834 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:08.834 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:08.834 Available Spare: 0% 00:27:08.834 Available Spare Threshold: 0% 00:27:08.834 Life Percentage Used:[2024-07-27 02:26:36.756759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.756771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x80e630) 00:27:08.834 [2024-07-27 02:26:36.756782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.834 [2024-07-27 02:26:36.756804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85da00, cid 7, qid 0 00:27:08.834 [2024-07-27 02:26:36.756980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.834 [2024-07-27 02:26:36.756995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.834 [2024-07-27 02:26:36.757002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85da00) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757053] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:08.834 [2024-07-27 02:26:36.757083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85cf80) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.834 [2024-07-27 02:26:36.757103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d100) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.834 [2024-07-27 02:26:36.757120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d280) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.834 [2024-07-27 02:26:36.757136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d400) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:08.834 [2024-07-27 02:26:36.757156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x80e630) 00:27:08.834 [2024-07-27 02:26:36.757182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.834 [2024-07-27 02:26:36.757205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d400, cid 3, qid 0 00:27:08.834 [2024-07-27 02:26:36.757349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.834 [2024-07-27 02:26:36.757364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.834 [2024-07-27 02:26:36.757372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d400) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x80e630) 00:27:08.834 [2024-07-27 02:26:36.757416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.834 [2024-07-27 02:26:36.757443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d400, cid 3, qid 0 00:27:08.834 [2024-07-27 02:26:36.757592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.834 [2024-07-27 02:26:36.757607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.834 [2024-07-27 02:26:36.757614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d400) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757629] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:08.834 [2024-07-27 02:26:36.757637] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:08.834 [2024-07-27 02:26:36.757654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x80e630) 00:27:08.834 [2024-07-27 02:26:36.757680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.834 [2024-07-27 02:26:36.757702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d400, cid 3, qid 0 00:27:08.834 [2024-07-27 02:26:36.757846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.834 [2024-07-27 02:26:36.757861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.834 [2024-07-27 02:26:36.757868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.834 [2024-07-27 
02:26:36.757875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d400) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.757892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.757908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x80e630) 00:27:08.834 [2024-07-27 02:26:36.757918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.834 [2024-07-27 02:26:36.757940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d400, cid 3, qid 0 00:27:08.834 [2024-07-27 02:26:36.762070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.834 [2024-07-27 02:26:36.762087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.834 [2024-07-27 02:26:36.762095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.762102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d400) on tqpair=0x80e630 00:27:08.834 [2024-07-27 02:26:36.762134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.762144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.762151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x80e630) 00:27:08.834 [2024-07-27 02:26:36.762162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.834 [2024-07-27 02:26:36.762190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85d400, cid 3, qid 0 00:27:08.834 [2024-07-27 02:26:36.762337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:08.834 [2024-07-27 02:26:36.762349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:08.834 [2024-07-27 02:26:36.762356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:08.834 [2024-07-27 02:26:36.762363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x85d400) on tqpair=0x80e630 00:27:08.835 [2024-07-27 02:26:36.762376] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:27:08.835 0% 00:27:08.835 Data Units Read: 0 00:27:08.835 Data Units Written: 0 00:27:08.835 Host Read Commands: 0 00:27:08.835 Host Write Commands: 0 00:27:08.835 Controller Busy Time: 0 minutes 00:27:08.835 Power Cycles: 0 00:27:08.835 Power On Hours: 0 hours 00:27:08.835 Unsafe Shutdowns: 0 00:27:08.835 Unrecoverable Media Errors: 0 00:27:08.835 Lifetime Error Log Entries: 0 00:27:08.835 Warning Temperature Time: 0 minutes 00:27:08.835 Critical Temperature Time: 0 minutes 00:27:08.835 00:27:08.835 Number of Queues 00:27:08.835 ================ 00:27:08.835 Number of I/O Submission Queues: 127 00:27:08.835 Number of I/O Completion Queues: 127 00:27:08.835 00:27:08.835 Active Namespaces 00:27:08.835 ================= 00:27:08.835 Namespace ID:1 00:27:08.835 Error Recovery Timeout: Unlimited 00:27:08.835 Command Set Identifier: NVM (00h) 00:27:08.835 Deallocate: Supported 00:27:08.835 Deallocated/Unwritten Error: Not Supported 00:27:08.835 Deallocated Read Value: Unknown 00:27:08.835 Deallocate in Write Zeroes: Not Supported 
00:27:08.835 Deallocated Guard Field: 0xFFFF 00:27:08.835 Flush: Supported 00:27:08.835 Reservation: Supported 00:27:08.835 Namespace Sharing Capabilities: Multiple Controllers 00:27:08.835 Size (in LBAs): 131072 (0GiB) 00:27:08.835 Capacity (in LBAs): 131072 (0GiB) 00:27:08.835 Utilization (in LBAs): 131072 (0GiB) 00:27:08.835 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:08.835 EUI64: ABCDEF0123456789 00:27:08.835 UUID: bdd001cc-2f4e-4be5-b692-fc16675eb042 00:27:08.835 Thin Provisioning: Not Supported 00:27:08.835 Per-NS Atomic Units: Yes 00:27:08.835 Atomic Boundary Size (Normal): 0 00:27:08.835 Atomic Boundary Size (PFail): 0 00:27:08.835 Atomic Boundary Offset: 0 00:27:08.835 Maximum Single Source Range Length: 65535 00:27:08.835 Maximum Copy Length: 65535 00:27:08.835 Maximum Source Range Count: 1 00:27:08.835 NGUID/EUI64 Never Reused: No 00:27:08.835 Namespace Write Protected: No 00:27:08.835 Number of LBA Formats: 1 00:27:08.835 Current LBA Format: LBA Format #00 00:27:08.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:08.835 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.835 rmmod nvme_tcp 00:27:08.835 rmmod nvme_fabrics 00:27:08.835 rmmod nvme_keyring 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1123121 ']' 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1123121 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1123121 ']' 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1123121 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1123121 00:27:08.835 02:26:36 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1123121' 00:27:08.835 killing process with pid 1123121 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1123121 00:27:08.835 02:26:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1123121 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.095 02:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.636 00:27:11.636 real 0m5.299s 00:27:11.636 user 0m4.494s 00:27:11.636 sys 0m1.786s 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:11.636 ************************************ 00:27:11.636 END TEST nvmf_identify 00:27:11.636 ************************************ 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.636 ************************************ 00:27:11.636 START TEST nvmf_perf 00:27:11.636 ************************************ 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:11.636 * Looking for test storage... 
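[editor's note] The teardown traced above — the `kill -0` liveness check, the `ps --no-headers -o comm=` sudo guard, then `kill` and `wait` on pid 1123121 — is autotest_common.sh's killprocess helper. As a reading aid, here is a minimal bash sketch of that pattern, reconstructed purely from the traced commands; the @-tags in the comments refer to the autotest_common.sh line numbers shown in the xtrace, and the body is an approximation, not the verbatim helper:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # @950: refuse an empty pid
    kill -0 "$pid" || return 1           # @954: bail out if it already exited
    if [ "$(uname)" = Linux ]; then
        # @956/@960: resolve the real command name and never signal sudo itself
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"                          # @969: default SIGTERM first
    wait "$pid" 2>/dev/null || true      # @974: reap it (pid is a child of the test shell here)
}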
00:27:11.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:11.636 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
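[editor's note] For readers skimming the wall of xtrace output, the nvmf/common.sh and perf.sh sourcing just traced boils down to a handful of defaults. This condensed sketch copies the values verbatim from the trace; only the hostid derivation is an editorial restatement of what `nvme gen-hostnqn` produced above:

NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the uuid portion, matching the traced NVME_HOSTID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
MALLOC_BDEV_SIZE=64                     # perf.sh: malloc bdev size (MiB)
MALLOC_BLOCK_SIZE=512                   # perf.sh: malloc bdev block size (bytes)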
00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.637 02:26:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.540 
02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:27:13.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.540 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:27:13.541 00:27:13.541 --- 10.0.0.2 ping statistics --- 00:27:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.541 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:13.541 00:27:13.541 --- 10.0.0.1 ping statistics --- 00:27:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.541 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1125215 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1125215 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1125215 ']' 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
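Condensed, the network bring-up traced above is the following sequence (the same commands nvmf_tcp_init in nvmf/common.sh just executed, using this run's ice netdev names cvl_0_0/cvl_0_1):

  # Isolate the target port in its own namespace, address both ends, open 4420.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator check

With both pings answering, nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced above), so the target listens on 10.0.0.2 while the perf initiator connects from the root namespace.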
00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.541 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:13.541 [2024-07-27 02:26:41.471721] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:27:13.541 [2024-07-27 02:26:41.471794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.541 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.541 [2024-07-27 02:26:41.509814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:13.541 [2024-07-27 02:26:41.542005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.541 [2024-07-27 02:26:41.632268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.541 [2024-07-27 02:26:41.632323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.541 [2024-07-27 02:26:41.632361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.541 [2024-07-27 02:26:41.632372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.541 [2024-07-27 02:26:41.632382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.541 [2024-07-27 02:26:41.632516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.541 [2024-07-27 02:26:41.632582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.541 [2024-07-27 02:26:41.632654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.541 [2024-07-27 02:26:41.632656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:13.799 02:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:17.078 02:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:17.078 02:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:17.078 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:17.079 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:17.335 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:17.335 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:17.335 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:17.336 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:17.336 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:17.592 [2024-07-27 02:26:45.664825] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.592 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.849 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:17.849 02:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:18.106 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:18.106 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:18.363 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.620 [2024-07-27 02:26:46.668455] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.620 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:18.879 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:18.879 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:18.879 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:18.879 02:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:20.258 Initializing NVMe Controllers 00:27:20.258 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:20.258 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:20.258 Initialization complete. Launching workers. 
00:27:20.258 ======================================================== 00:27:20.258 Latency(us) 00:27:20.258 Device Information : IOPS MiB/s Average min max 00:27:20.258 PCIE (0000:88:00.0) NSID 1 from core 0: 85151.83 332.62 375.29 43.34 6246.54 00:27:20.258 ======================================================== 00:27:20.258 Total : 85151.83 332.62 375.29 43.34 6246.54 00:27:20.258 00:27:20.258 02:26:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:20.258 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.634 Initializing NVMe Controllers 00:27:21.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:21.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:21.634 Initialization complete. Launching workers. 00:27:21.634 ======================================================== 00:27:21.634 Latency(us) 00:27:21.634 Device Information : IOPS MiB/s Average min max 00:27:21.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.72 0.32 12501.19 207.70 45880.60 00:27:21.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 42.85 0.17 23962.71 7935.84 50890.93 00:27:21.634 ======================================================== 00:27:21.634 Total : 123.57 0.48 16475.75 207.70 50890.93 00:27:21.634 00:27:21.634 02:26:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.634 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.013 Initializing NVMe Controllers 00:27:23.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:23.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:23.013 Initialization complete. Launching workers. 
00:27:23.013 ======================================================== 00:27:23.013 Latency(us) 00:27:23.013 Device Information : IOPS MiB/s Average min max 00:27:23.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7206.89 28.15 4441.72 886.67 8848.32 00:27:23.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3817.11 14.91 8465.49 5376.54 47695.68 00:27:23.013 ======================================================== 00:27:23.013 Total : 11024.01 43.06 5834.97 886.67 47695.68 00:27:23.013 00:27:23.013 02:26:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:23.013 02:26:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:23.013 02:26:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.013 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.548 Initializing NVMe Controllers 00:27:25.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.548 Controller IO queue size 128, less than required. 00:27:25.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:25.548 Controller IO queue size 128, less than required. 00:27:25.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:25.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:25.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:25.548 Initialization complete. Launching workers. 00:27:25.548 ======================================================== 00:27:25.548 Latency(us) 00:27:25.548 Device Information : IOPS MiB/s Average min max 00:27:25.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 781.50 195.38 169718.18 115062.02 213566.86 00:27:25.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.50 146.12 227371.89 85739.11 349984.35 00:27:25.548 ======================================================== 00:27:25.548 Total : 1366.00 341.50 194387.72 85739.11 349984.35 00:27:25.548 00:27:25.548 02:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:25.548 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.548 No valid NVMe controllers or AIO or URING devices found 00:27:25.548 Initializing NVMe Controllers 00:27:25.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:25.548 Controller IO queue size 128, less than required. 00:27:25.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:25.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:25.548 Controller IO queue size 128, less than required. 00:27:25.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:25.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:25.548 WARNING: Some requested NVMe devices were skipped 00:27:25.548 02:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:25.548 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.891 Initializing NVMe Controllers 00:27:28.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.891 Controller IO queue size 128, less than required. 00:27:28.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:28.891 Controller IO queue size 128, less than required. 00:27:28.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:28.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:28.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:28.891 Initialization complete. Launching workers. 00:27:28.891 00:27:28.891 ==================== 00:27:28.891 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:28.891 TCP transport: 00:27:28.891 polls: 34130 00:27:28.891 idle_polls: 11019 00:27:28.891 sock_completions: 23111 00:27:28.891 nvme_completions: 3799 00:27:28.891 submitted_requests: 5678 00:27:28.891 queued_requests: 1 00:27:28.891 00:27:28.891 ==================== 00:27:28.891 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:28.891 TCP transport: 00:27:28.891 polls: 36317 00:27:28.891 idle_polls: 12707 00:27:28.891 sock_completions: 23610 00:27:28.891 nvme_completions: 3747 00:27:28.891 submitted_requests: 5564 00:27:28.891 queued_requests: 1 00:27:28.891 ======================================================== 00:27:28.891 Latency(us) 00:27:28.891 Device Information : IOPS MiB/s Average min max 00:27:28.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 949.43 237.36 139275.86 75118.07 224115.72 00:27:28.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 936.43 234.11 138801.02 55879.30 183034.62 00:27:28.891 ======================================================== 00:27:28.891 Total : 1885.86 471.46 139040.07 55879.30 224115.72 00:27:28.891 00:27:28.891 02:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:28.891 02:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.891 02:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:28.891 02:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:27:28.891 02:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:32.179 02:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c2565e4d-ce17-4dfd-8815-6d9dad4da975 00:27:32.179 02:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c2565e4d-ce17-4dfd-8815-6d9dad4da975 00:27:32.179 02:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c2565e4d-ce17-4dfd-8815-6d9dad4da975 00:27:32.179 02:26:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:32.179 02:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:32.179 02:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:32.179 02:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:32.179 { 00:27:32.179 "uuid": "c2565e4d-ce17-4dfd-8815-6d9dad4da975", 00:27:32.179 "name": "lvs_0", 00:27:32.179 "base_bdev": "Nvme0n1", 00:27:32.179 "total_data_clusters": 238234, 00:27:32.179 "free_clusters": 238234, 00:27:32.179 "block_size": 512, 00:27:32.179 "cluster_size": 4194304 00:27:32.179 } 00:27:32.179 ]' 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c2565e4d-ce17-4dfd-8815-6d9dad4da975") .free_clusters' 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c2565e4d-ce17-4dfd-8815-6d9dad4da975") .cluster_size' 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:27:32.179 952936 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:32.179 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c2565e4d-ce17-4dfd-8815-6d9dad4da975 lbd_0 20480 00:27:32.745 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=be42c39f-3a08-4a83-adda-f2f89f5d943a 00:27:32.745 02:27:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore be42c39f-3a08-4a83-adda-f2f89f5d943a lvs_n_0 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=6725e151-afc3-4c38-a506-a37823ccd83f 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 6725e151-afc3-4c38-a506-a37823ccd83f 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=6725e151-afc3-4c38-a506-a37823ccd83f 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:33.677 { 00:27:33.677 "uuid": "c2565e4d-ce17-4dfd-8815-6d9dad4da975", 00:27:33.677 "name": "lvs_0", 00:27:33.677 "base_bdev": "Nvme0n1", 00:27:33.677 "total_data_clusters": 238234, 
00:27:33.677 "free_clusters": 233114, 00:27:33.677 "block_size": 512, 00:27:33.677 "cluster_size": 4194304 00:27:33.677 }, 00:27:33.677 { 00:27:33.677 "uuid": "6725e151-afc3-4c38-a506-a37823ccd83f", 00:27:33.677 "name": "lvs_n_0", 00:27:33.677 "base_bdev": "be42c39f-3a08-4a83-adda-f2f89f5d943a", 00:27:33.677 "total_data_clusters": 5114, 00:27:33.677 "free_clusters": 5114, 00:27:33.677 "block_size": 512, 00:27:33.677 "cluster_size": 4194304 00:27:33.677 } 00:27:33.677 ]' 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="6725e151-afc3-4c38-a506-a37823ccd83f") .free_clusters' 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="6725e151-afc3-4c38-a506-a37823ccd83f") .cluster_size' 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:33.677 20456 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:33.677 02:27:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6725e151-afc3-4c38-a506-a37823ccd83f lbd_nest_0 20456 00:27:33.935 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=83824dda-d1a0-4811-8d37-f4842f0a39be 00:27:33.935 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.193 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:34.193 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 83824dda-d1a0-4811-8d37-f4842f0a39be 00:27:34.450 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.707 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:34.707 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:34.707 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:34.707 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:34.707 02:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:34.965 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.168 Initializing NVMe Controllers 00:27:47.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:47.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:47.168 Initialization complete. Launching workers. 
00:27:47.168 ======================================================== 00:27:47.168 Latency(us) 00:27:47.168 Device Information : IOPS MiB/s Average min max 00:27:47.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.49 0.02 21508.89 237.51 46067.18 00:27:47.168 ======================================================== 00:27:47.168 Total : 46.49 0.02 21508.89 237.51 46067.18 00:27:47.168 00:27:47.168 02:27:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:47.168 02:27:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:47.168 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.140 Initializing NVMe Controllers 00:27:57.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:57.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:57.140 Initialization complete. Launching workers. 00:27:57.140 ======================================================== 00:27:57.140 Latency(us) 00:27:57.140 Device Information : IOPS MiB/s Average min max 00:27:57.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.50 9.69 12912.31 5375.11 47897.59 00:27:57.140 ======================================================== 00:27:57.140 Total : 77.50 9.69 12912.31 5375.11 47897.59 00:27:57.140 00:27:57.140 02:27:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:57.140 02:27:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:57.140 02:27:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:57.140 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.140 Initializing NVMe Controllers 00:28:07.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:07.140 Initialization complete. Launching workers. 00:28:07.140 ======================================================== 00:28:07.140 Latency(us) 00:28:07.141 Device Information : IOPS MiB/s Average min max 00:28:07.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6909.20 3.37 4631.37 320.45 12075.53 00:28:07.141 ======================================================== 00:28:07.141 Total : 6909.20 3.37 4631.37 320.45 12075.53 00:28:07.141 00:28:07.141 02:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:07.141 02:27:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:07.141 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.110 Initializing NVMe Controllers 00:28:17.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:17.110 Initialization complete. Launching workers. 
00:28:17.110 ======================================================== 00:28:17.110 Latency(us) 00:28:17.110 Device Information : IOPS MiB/s Average min max 00:28:17.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1753.10 219.14 18263.26 1631.68 38791.41 00:28:17.110 ======================================================== 00:28:17.110 Total : 1753.10 219.14 18263.26 1631.68 38791.41 00:28:17.110 00:28:17.110 02:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:17.110 02:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:17.110 02:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:17.110 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.085 Initializing NVMe Controllers 00:28:27.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.085 Controller IO queue size 128, less than required. 00:28:27.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.085 Initialization complete. Launching workers. 00:28:27.085 ======================================================== 00:28:27.085 Latency(us) 00:28:27.085 Device Information : IOPS MiB/s Average min max 00:28:27.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11463.12 5.60 11170.43 1581.44 25012.21 00:28:27.085 ======================================================== 00:28:27.085 Total : 11463.12 5.60 11170.43 1581.44 25012.21 00:28:27.085 00:28:27.085 02:27:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:27.085 02:27:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.085 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.052 Initializing NVMe Controllers 00:28:37.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.052 Controller IO queue size 128, less than required. 00:28:37.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:37.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.052 Initialization complete. Launching workers. 
00:28:37.052 ======================================================== 00:28:37.052 Latency(us) 00:28:37.052 Device Information : IOPS MiB/s Average min max 00:28:37.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1228.12 153.52 104409.59 31728.98 183291.97 00:28:37.052 ======================================================== 00:28:37.052 Total : 1228.12 153.52 104409.59 31728.98 183291.97 00:28:37.052 00:28:37.052 02:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:37.310 02:28:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 83824dda-d1a0-4811-8d37-f4842f0a39be 00:28:38.243 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:38.243 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be42c39f-3a08-4a83-adda-f2f89f5d943a 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:38.809 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:38.809 rmmod nvme_tcp 00:28:38.809 rmmod nvme_fabrics 00:28:38.809 rmmod nvme_keyring 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1125215 ']' 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1125215 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1125215 ']' 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1125215 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.067 02:28:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125215 00:28:39.067 02:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:39.067 02:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:39.067 02:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 1125215' 00:28:39.067 killing process with pid 1125215 00:28:39.067 02:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1125215 00:28:39.067 02:28:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1125215 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.968 02:28:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.905 00:28:42.905 real 1m31.447s 00:28:42.905 user 5m39.004s 00:28:42.905 sys 0m15.241s 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:42.905 ************************************ 00:28:42.905 END TEST nvmf_perf 00:28:42.905 ************************************ 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.905 ************************************ 00:28:42.905 START TEST nvmf_fio_host 00:28:42.905 ************************************ 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:42.905 * Looking for test storage... 
00:28:42.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:42.905 02:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:44.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:44.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:44.808 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.809 
02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:44.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:44.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
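At this point gather_supported_nvmf_pci_devs has matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, driven by ice) and resolved each PCI function to its kernel netdev through sysfs, and nvmf_tcp_init begins splitting those two ports into a back-to-back target/initiator pair on one machine. A minimal sketch of the sysfs lookup behind the "Found net devices under ..." lines above (the PCI address is taken from this log; the loop itself is illustrative, not the helper's exact code):

  # Resolve a PCI function to its netdev name via sysfs
  pci=0000:0a:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      echo "Found net devices under $pci: ${dev##*/}"
  done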
00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:44.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:28:44.809 00:28:44.809 --- 10.0.0.2 ping statistics --- 00:28:44.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.809 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:28:44.809 00:28:44.809 --- 10.0.0.1 ping statistics --- 00:28:44.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.809 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1137924 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 1137924 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1137924 ']' 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:44.809 02:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.809 [2024-07-27 02:28:12.841399] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:28:44.809 [2024-07-27 02:28:12.841482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.809 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.809 [2024-07-27 02:28:12.879484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:44.809 [2024-07-27 02:28:12.906442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.068 [2024-07-27 02:28:12.993444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.068 [2024-07-27 02:28:12.993500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.068 [2024-07-27 02:28:12.993514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.068 [2024-07-27 02:28:12.993525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.068 [2024-07-27 02:28:12.993535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
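nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk so it binds only the namespaced port, and waitforlisten blocks until the application's JSON-RPC server answers on /var/tmp/spdk.sock. Roughly what that helper does, sketched below (rpc_get_methods is a standard SPDK RPC; the polling loop illustrates the idea, not the exact implementation):

  # Poll the UNIX-domain RPC socket until nvmf_tgt (pid 1137924) is ready
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

The -m 0xF core mask pins the app to four cores, which is why four reactor threads report in just below; once the target is up, the script creates the TCP transport and exports a Malloc bdev as subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420.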
00:28:45.068 [2024-07-27 02:28:12.993600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.068 [2024-07-27 02:28:12.993657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.068 [2024-07-27 02:28:12.993727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.068 [2024-07-27 02:28:12.993729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.068 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:45.068 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:45.068 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:45.325 [2024-07-27 02:28:13.352202] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.325 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:45.325 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:45.325 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.325 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:45.586 Malloc1 00:28:45.586 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.844 02:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:46.102 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.360 [2024-07-27 02:28:14.435599] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.360 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:46.618 02:28:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:46.618 02:28:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:46.876 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:46.876 fio-3.35 00:28:46.876 Starting 1 thread 00:28:46.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.405 00:28:49.405 test: (groupid=0, jobs=1): err= 0: pid=1138279: Sat Jul 27 02:28:17 2024 00:28:49.405 read: IOPS=8814, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2007msec) 00:28:49.405 slat (nsec): min=1967, max=147804, avg=2625.37, stdev=1792.09 00:28:49.405 clat (usec): min=3480, max=14711, avg=8030.31, stdev=597.29 00:28:49.405 lat (usec): min=3509, max=14713, avg=8032.93, stdev=597.19 00:28:49.405 clat percentiles (usec): 00:28:49.405 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7570], 00:28:49.405 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:28:49.405 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:28:49.405 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[12518], 99.95th=[14222], 00:28:49.405 | 99.99th=[14746] 00:28:49.405 bw ( KiB/s): min=34328, max=35880, per=99.96%, avg=35246.00, stdev=663.29, samples=4 00:28:49.405 iops : min= 8582, max= 8970, avg=8811.50, stdev=165.82, samples=4 00:28:49.405 write: IOPS=8827, BW=34.5MiB/s (36.2MB/s)(69.2MiB/2007msec); 0 
zone resets 00:28:49.405 slat (usec): min=2, max=127, avg= 2.73, stdev= 1.33 00:28:49.405 clat (usec): min=1522, max=12254, avg=6429.44, stdev=516.91 00:28:49.405 lat (usec): min=1531, max=12257, avg=6432.17, stdev=516.87 00:28:49.405 clat percentiles (usec): 00:28:49.405 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:28:49.405 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:28:49.405 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 6980], 95.00th=[ 7177], 00:28:49.405 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[10290], 99.95th=[11469], 00:28:49.405 | 99.99th=[12256] 00:28:49.405 bw ( KiB/s): min=35056, max=35776, per=100.00%, avg=35316.00, stdev=317.49, samples=4 00:28:49.405 iops : min= 8764, max= 8944, avg=8829.00, stdev=79.37, samples=4 00:28:49.405 lat (msec) : 2=0.01%, 4=0.08%, 10=99.76%, 20=0.15% 00:28:49.405 cpu : usr=53.79%, sys=38.68%, ctx=82, majf=0, minf=40 00:28:49.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:49.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:49.405 issued rwts: total=17691,17717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:49.405 00:28:49.405 Run status group 0 (all jobs): 00:28:49.405 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.5MB), run=2007-2007msec 00:28:49.405 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.2MiB (72.6MB), run=2007-2007msec 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:49.405 02:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:49.405 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:49.405 fio-3.35 00:28:49.405 Starting 1 thread 00:28:49.405 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.937 00:28:51.937 test: (groupid=0, jobs=1): err= 0: pid=1138613: Sat Jul 27 02:28:19 2024 00:28:51.937 read: IOPS=8037, BW=126MiB/s (132MB/s)(252MiB/2008msec) 00:28:51.937 slat (usec): min=2, max=117, avg= 3.74, stdev= 1.97 00:28:51.937 clat (usec): min=3176, max=19648, avg=9600.11, stdev=2554.74 00:28:51.937 lat (usec): min=3179, max=19657, avg=9603.85, stdev=2555.00 00:28:51.937 clat percentiles (usec): 00:28:51.937 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 7373], 00:28:51.937 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:28:51.937 | 70.00th=[10945], 80.00th=[11863], 90.00th=[12911], 95.00th=[13698], 00:28:51.937 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18744], 99.95th=[19268], 00:28:51.937 | 99.99th=[19530] 00:28:51.937 bw ( KiB/s): min=59168, max=71840, per=50.81%, avg=65344.00, stdev=5228.06, samples=4 00:28:51.937 iops : min= 3698, max= 4490, avg=4084.00, stdev=326.75, samples=4 00:28:51.937 write: IOPS=4686, BW=73.2MiB/s (76.8MB/s)(134MiB/1831msec); 0 zone resets 00:28:51.937 slat (usec): min=30, max=315, avg=34.11, stdev= 8.73 00:28:51.937 clat (usec): min=6210, max=20877, avg=11056.44, stdev=2026.93 00:28:51.937 lat (usec): min=6257, max=20955, avg=11090.55, stdev=2029.94 00:28:51.937 clat percentiles (usec): 00:28:51.937 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9241], 00:28:51.938 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11207], 00:28:51.938 | 70.00th=[11863], 80.00th=[12780], 90.00th=[13960], 95.00th=[14615], 00:28:51.938 | 99.00th=[16581], 99.50th=[17957], 99.90th=[20317], 99.95th=[20579], 00:28:51.938 | 99.99th=[20841] 00:28:51.938 bw ( KiB/s): min=61696, max=75552, per=90.86%, avg=68128.00, stdev=5915.47, samples=4 00:28:51.938 iops : min= 3856, max= 4722, avg=4258.00, stdev=369.72, samples=4 00:28:51.938 lat (msec) : 4=0.11%, 10=50.85%, 20=48.97%, 50=0.07% 00:28:51.938 cpu : usr=73.90%, sys=21.26%, ctx=30, majf=0, minf=58 
00:28:51.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:28:51.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:51.938 issued rwts: total=16140,8581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:51.938 00:28:51.938 Run status group 0 (all jobs): 00:28:51.938 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=252MiB (264MB), run=2008-2008msec 00:28:51.938 WRITE: bw=73.2MiB/s (76.8MB/s), 73.2MiB/s-73.2MiB/s (76.8MB/s-76.8MB/s), io=134MiB (141MB), run=1831-1831msec 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:51.938 02:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:51.938 02:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:51.938 02:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:28:51.939 02:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:28:55.215 Nvme0n1 00:28:55.215 02:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:58.492 02:28:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=51a22d7a-d389-423c-ae53-204d68848c57 00:28:58.492 02:28:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 51a22d7a-d389-423c-ae53-204d68848c57 00:28:58.492 02:28:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=51a22d7a-d389-423c-ae53-204d68848c57 00:28:58.492 02:28:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:58.492 02:28:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:58.492 02:28:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:58.492 02:28:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:58.492 { 00:28:58.492 "uuid": 
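Two notes on the fio passes just completed. First, each fio_nvme call inspects the SPDK fio plugin with ldd/grep/awk to decide whether an ASan runtime must be LD_PRELOADed ahead of it; none is linked in this build, so LD_PRELOAD ends up carrying only the plugin path. Second, the harness now switches from the Malloc bdev to a real local NVMe drive: gen_nvme.sh reports a single controller at 0000:88:00.0, which is attached as bdev Nvme0 and wrapped in a logical-volume store with 1 GiB clusters. The free-space figure printed next follows directly from the lvstore JSON: 930 free clusters x 1 GiB = 930 x 1024 MiB = 952320 MiB, the size handed to bdev_lvol_create (the same get_lvs_free_mb arithmetic recurs for the nested store later: 237847 clusters x 4 MiB = 951388 MiB). A condensed sketch of that RPC sequence, using only commands and addresses that appear in this log:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0   # exposes namespace bdev Nvme0n1
  $rpc_py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0           # 1 GiB cluster size
  $rpc_py bdev_lvol_create -l lvs_0 lbd_0 952320                         # one lvol spanning all free clusters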
"51a22d7a-d389-423c-ae53-204d68848c57", 00:28:58.492 "name": "lvs_0", 00:28:58.492 "base_bdev": "Nvme0n1", 00:28:58.492 "total_data_clusters": 930, 00:28:58.492 "free_clusters": 930, 00:28:58.492 "block_size": 512, 00:28:58.492 "cluster_size": 1073741824 00:28:58.492 } 00:28:58.492 ]' 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="51a22d7a-d389-423c-ae53-204d68848c57") .free_clusters' 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="51a22d7a-d389-423c-ae53-204d68848c57") .cluster_size' 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:28:58.492 952320 00:28:58.492 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:28:58.750 f216a5b0-461f-431f-b2c0-195d5e282dca 00:28:58.750 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:59.007 02:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:59.265 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:59.523 02:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:59.781 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:59.781 fio-3.35 00:28:59.781 Starting 1 thread 00:28:59.781 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.308 00:29:02.308 test: (groupid=0, jobs=1): err= 0: pid=1139894: Sat Jul 27 02:28:30 2024 00:29:02.308 read: IOPS=6003, BW=23.5MiB/s (24.6MB/s)(47.1MiB/2007msec) 00:29:02.308 slat (nsec): min=1937, max=140519, avg=2713.43, stdev=2308.79 00:29:02.308 clat (usec): min=877, max=171443, avg=11788.68, stdev=11644.23 00:29:02.308 lat (usec): min=880, max=171492, avg=11791.40, stdev=11644.49 00:29:02.308 clat percentiles (msec): 00:29:02.308 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:02.308 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:29:02.308 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:29:02.308 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:02.308 | 99.99th=[ 171] 00:29:02.308 bw ( KiB/s): min=16846, max=26408, per=99.70%, avg=23943.50, stdev=4733.72, samples=4 00:29:02.308 iops : min= 4211, max= 6602, avg=5985.75, stdev=1183.68, samples=4 00:29:02.308 write: IOPS=5985, BW=23.4MiB/s (24.5MB/s)(46.9MiB/2007msec); 0 zone resets 00:29:02.308 slat (nsec): min=2061, max=99390, avg=2801.15, stdev=1922.61 00:29:02.308 clat (usec): min=412, max=169760, avg=9430.84, stdev=10956.92 00:29:02.308 lat (usec): min=414, max=169765, avg=9433.64, stdev=10957.17 00:29:02.308 clat percentiles (msec): 00:29:02.308 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:02.308 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:02.308 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:02.309 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:29:02.309 | 99.99th=[ 169] 00:29:02.309 bw ( 
KiB/s): min=17860, max=26048, per=99.87%, avg=23909.00, stdev=4033.60, samples=4 00:29:02.309 iops : min= 4465, max= 6512, avg=5977.25, stdev=1008.40, samples=4 00:29:02.309 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:02.309 lat (msec) : 2=0.03%, 4=0.13%, 10=54.66%, 20=44.62%, 250=0.53% 00:29:02.309 cpu : usr=53.79%, sys=41.23%, ctx=113, majf=0, minf=40 00:29:02.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:02.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:02.309 issued rwts: total=12050,12012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:02.309 00:29:02.309 Run status group 0 (all jobs): 00:29:02.309 READ: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2007-2007msec 00:29:02.309 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2007-2007msec 00:29:02.309 02:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:02.309 02:28:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=5af51483-8959-4f05-ae23-31129ef5dc7e 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 5af51483-8959-4f05-ae23-31129ef5dc7e 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=5af51483-8959-4f05-ae23-31129ef5dc7e 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:03.746 { 00:29:03.746 "uuid": "51a22d7a-d389-423c-ae53-204d68848c57", 00:29:03.746 "name": "lvs_0", 00:29:03.746 "base_bdev": "Nvme0n1", 00:29:03.746 "total_data_clusters": 930, 00:29:03.746 "free_clusters": 0, 00:29:03.746 "block_size": 512, 00:29:03.746 "cluster_size": 1073741824 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "uuid": "5af51483-8959-4f05-ae23-31129ef5dc7e", 00:29:03.746 "name": "lvs_n_0", 00:29:03.746 "base_bdev": "f216a5b0-461f-431f-b2c0-195d5e282dca", 00:29:03.746 "total_data_clusters": 237847, 00:29:03.746 "free_clusters": 237847, 00:29:03.746 "block_size": 512, 00:29:03.746 "cluster_size": 4194304 00:29:03.746 } 00:29:03.746 ]' 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5af51483-8959-4f05-ae23-31129ef5dc7e") .free_clusters' 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="5af51483-8959-4f05-ae23-31129ef5dc7e") .cluster_size' 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:03.746 951388 00:29:03.746 02:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:04.679 8254a710-df7f-444f-8db6-b2e0d92557cc 00:29:04.679 02:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:04.679 02:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:04.937 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:05.196 02:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:05.459 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:05.459 fio-3.35 00:29:05.459 Starting 1 thread 00:29:05.459 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.985 00:29:07.985 test: (groupid=0, jobs=1): err= 0: pid=1140628: Sat Jul 27 02:28:35 2024 00:29:07.985 read: IOPS=5847, BW=22.8MiB/s (23.9MB/s)(45.9MiB/2008msec) 00:29:07.985 slat (nsec): min=1960, max=179738, avg=2595.08, stdev=2568.62 00:29:07.985 clat (usec): min=4652, max=19879, avg=12109.27, stdev=1005.65 00:29:07.985 lat (usec): min=4657, max=19882, avg=12111.87, stdev=1005.48 00:29:07.985 clat percentiles (usec): 00:29:07.985 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:29:07.985 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:29:07.985 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:29:07.985 | 99.00th=[14353], 99.50th=[14746], 99.90th=[18744], 99.95th=[19006], 00:29:07.985 | 99.99th=[19792] 00:29:07.985 bw ( KiB/s): min=21848, max=24104, per=99.80%, avg=23342.00, stdev=1013.60, samples=4 00:29:07.985 iops : min= 5462, max= 6026, avg=5835.50, stdev=253.40, samples=4 00:29:07.985 write: IOPS=5834, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2008msec); 0 zone resets 00:29:07.985 slat (usec): min=2, max=105, avg= 2.71, stdev= 1.60 00:29:07.985 clat (usec): min=2300, max=17421, avg=9607.03, stdev=885.07 00:29:07.985 lat (usec): min=2308, max=17424, avg=9609.74, stdev=885.00 00:29:07.985 clat percentiles (usec): 00:29:07.985 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:07.985 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:29:07.985 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:29:07.985 | 99.00th=[11469], 99.50th=[11863], 99.90th=[15533], 99.95th=[16712], 00:29:07.985 | 99.99th=[17433] 00:29:07.985 bw ( KiB/s): min=22936, max=23552, per=99.91%, avg=23318.00, stdev=266.88, samples=4 00:29:07.985 iops : min= 5734, max= 5888, avg=5829.50, stdev=66.72, samples=4 00:29:07.985 lat (msec) : 4=0.05%, 10=35.36%, 20=64.59% 00:29:07.985 cpu : usr=55.75%, sys=39.51%, ctx=90, majf=0, minf=40 00:29:07.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:07.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:07.985 issued rwts: total=11741,11716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.985 
latency : target=0, window=0, percentile=100.00%, depth=128 00:29:07.985 00:29:07.985 Run status group 0 (all jobs): 00:29:07.985 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.9MiB (48.1MB), run=2008-2008msec 00:29:07.985 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2008-2008msec 00:29:07.985 02:28:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:07.985 02:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:07.985 02:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:12.171 02:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:12.171 02:28:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:15.460 02:28:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:15.460 02:28:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.362 rmmod nvme_tcp 00:29:17.362 rmmod nvme_fabrics 00:29:17.362 rmmod nvme_keyring 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1137924 ']' 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1137924 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1137924 ']' 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1137924 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1137924 
00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1137924' 00:29:17.362 killing process with pid 1137924 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1137924 00:29:17.362 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1137924 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.621 02:28:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.523 02:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:19.782 00:29:19.782 real 0m36.978s 00:29:19.782 user 2m21.546s 00:29:19.782 sys 0m7.158s 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.782 ************************************ 00:29:19.782 END TEST nvmf_fio_host 00:29:19.782 ************************************ 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.782 ************************************ 00:29:19.782 START TEST nvmf_failover 00:29:19.782 ************************************ 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:19.782 * Looking for test storage... 
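The teardown traced above unwinds the stack in reverse: subsystems and lvols are deleted over RPC, the NVMe controller is detached, the kernel initiator modules come out (the bare rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), target process 1137924 is killed and reaped, and the leftover test addresses are flushed. Sketched in order (module, pid and interface names from the log; the explicit netns deletion is an assumption about what _remove_spdk_ns does):

  modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring
  kill 1137924 && wait 1137924       # reactor_0, the nvmf_tgt started earlier
  ip netns del cvl_0_0_ns_spdk       # assumed cleanup performed by _remove_spdk_ns
  ip -4 addr flush cvl_0_1

nvmf_fio_host completes in 36.978 s of wall time against 2m21.546 s of CPU time, consistent with four busy-polling reactor cores, and the suite moves straight on to nvmf_failover.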
00:29:19.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
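[Editor's sketch] Condensing the environment the sourced scripts just established (every value below is copied from the xtrace above, not invented): the suite will exercise three TCP listener ports on one subsystem and drive two RPC endpoints, one for the target and one for the bdevperf initiator.

    # Values as set by nvmf/common.sh and host/failover.sh above.
    NVMF_PORT=4420            # primary listener
    NVMF_SECOND_PORT=4421     # first failover destination
    NVMF_THIRD_PORT=4422      # second failover destination
    MALLOC_BDEV_SIZE=64       # backing bdev size passed to bdev_malloc_create
    MALLOC_BLOCK_SIZE=512     # block size in bytes
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # RPC socket of the initiator-side bdevperf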
00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:19.782 02:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.684 02:28:49 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:21.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:21.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:21.684 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:21.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:21.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:21.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:29:21.685 00:29:21.685 --- 10.0.0.2 ping statistics --- 00:29:21.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.685 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:21.685 00:29:21.685 --- 10.0.0.1 ping statistics --- 00:29:21.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.685 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1143883 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:21.685 02:28:49 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1143883 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1143883 ']' 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:21.685 02:28:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:21.944 [2024-07-27 02:28:49.856129] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:29:21.944 [2024-07-27 02:28:49.856207] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.944 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.944 [2024-07-27 02:28:49.896909] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:21.944 [2024-07-27 02:28:49.925901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:21.944 [2024-07-27 02:28:50.017838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.944 [2024-07-27 02:28:50.017889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.944 [2024-07-27 02:28:50.017913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.944 [2024-07-27 02:28:50.017925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.944 [2024-07-27 02:28:50.017937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
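[Editor's sketch] At this point nvmftestinit has split the rig's two E810 (ice) ports across a network namespace so a single host can play both roles: cvl_0_0 (10.0.0.2) becomes the target port inside cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, both directions were verified with ping, and nvmf_tgt was launched inside the namespace. The topology commands replayed from the xtrace above (interface names are whatever this rig exposed; paths shortened only where noted):

    # Reconstructed from the nvmf_tcp_init xtrace in this log.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp                                      # host-side transport driver
    # Target app runs inside the namespace; -m 0xE pins reactors to cores 1-3,
    # matching the "Reactor started on core 1/2/3" notices above.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &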
00:29:21.944 [2024-07-27 02:28:50.018028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.944 [2024-07-27 02:28:50.018087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.944 [2024-07-27 02:28:50.018091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.202 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:22.202 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:22.202 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:22.202 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.202 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:22.202 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.202 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:22.459 [2024-07-27 02:28:50.384468] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.459 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:22.717 Malloc0 00:29:22.717 02:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.974 02:28:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.232 02:28:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.489 [2024-07-27 02:28:51.524016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.489 02:28:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:23.747 [2024-07-27 02:28:51.820890] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:23.747 02:28:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:24.030 [2024-07-27 02:28:52.122023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:24.030 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1144249 00:29:24.030 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:24.030 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.031 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1144249 /var/tmp/bdevperf.sock 00:29:24.031 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1144249 ']' 00:29:24.031 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:24.031 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.031 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:24.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:24.031 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.031 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:24.302 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.302 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:24.302 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:24.868 NVMe0n1 00:29:24.868 02:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:25.126 00:29:25.126 02:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1144297 00:29:25.126 02:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:25.126 02:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:26.057 02:28:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.315 [2024-07-27 02:28:54.366275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 [2024-07-27 02:28:54.366568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114a480 is same with the state(5) to be set 00:29:26.315 02:28:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:29.594 02:28:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:29.851 00:29:29.851 02:28:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:30.109 [2024-07-27 02:28:58.061052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114b250 is same with the state(5) to be set 00:29:30.109 02:28:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:33.390 02:29:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.390 [2024-07-27 02:29:01.332988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.390 02:29:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:34.322 02:29:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:34.580 02:29:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1144297 00:29:41.141 0 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1144249 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1144249 ']' 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1144249 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1144249 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1144249' 00:29:41.141 killing process with pid 1144249 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1144249 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1144249 00:29:41.141 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:41.141 [2024-07-27 02:28:52.187358] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:29:41.141 [2024-07-27 02:28:52.187484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144249 ] 00:29:41.141 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.141 [2024-07-27 02:28:52.221445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:41.141 [2024-07-27 02:28:52.251207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.141 [2024-07-27 02:28:52.343814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.141 Running I/O for 15 seconds... 00:29:41.141 [2024-07-27 02:28:54.368450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.141 [2024-07-27 02:28:54.368920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.368948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.368977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.368991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.141 [2024-07-27 02:28:54.369305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:41.141 [2024-07-27 02:28:54.369320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 
02:28:54.369620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.369981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.369995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:20 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.142 [2024-07-27 02:28:54.370441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.142 [2024-07-27 02:28:54.370472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.370947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.370960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.370971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.370988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79680 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79688 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 
02:28:54.371267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79704 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79712 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79720 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79736 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371590] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.143 [2024-07-27 02:28:54.371601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.143 [2024-07-27 02:28:54.371612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0 00:29:41.143 [2024-07-27 02:28:54.371634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.143 [2024-07-27 02:28:54.371647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.371659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.371671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79752 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.371686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.371700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.371711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.371723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79760 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.371736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.371749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.371760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.371771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79768 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.371784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.371797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.371808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.371819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.371831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.371844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.371856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.371868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.371881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.371894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:29:41.144 [2024-07-27 02:28:54.371905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.371917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.371929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.371942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.371953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.371966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.371978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.371991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372237] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.144 [2024-07-27 02:28:54.372731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:29:41.144 [2024-07-27 02:28:54.372744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.144 [2024-07-27 02:28:54.372757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.144 [2024-07-27 02:28:54.372768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.372780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.372792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.372805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.372817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.372828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.372842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.372854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.372866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 
[2024-07-27 02:28:54.372877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.372893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.372907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.372918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.372930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.372942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.372956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.372967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.372978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.372991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.145 [2024-07-27 02:28:54.373701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.145 [2024-07-27 02:28:54.373715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.145 [2024-07-27 02:28:54.373727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:29:41.145 [2024-07-27 02:28:54.373739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.373752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.146 [2024-07-27 02:28:54.373764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.146 [2024-07-27 02:28:54.373775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80072 len:8 PRP1 0x0 PRP2 0x0 00:29:41.146 [2024-07-27 02:28:54.373788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.373802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.146 [2024-07-27 02:28:54.373812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.146 [2024-07-27 02:28:54.373824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80080 len:8 PRP1 0x0 PRP2 0x0 
00:29:41.146 [2024-07-27 02:28:54.373836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.373849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.146 [2024-07-27 02:28:54.373860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.146 [2024-07-27 02:28:54.373871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80088 len:8 PRP1 0x0 PRP2 0x0 00:29:41.146 [2024-07-27 02:28:54.373883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.373901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.146 [2024-07-27 02:28:54.373913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.146 [2024-07-27 02:28:54.373925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80096 len:8 PRP1 0x0 PRP2 0x0 00:29:41.146 [2024-07-27 02:28:54.373937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.373992] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe56f80 was disconnected and freed. reset controller. 00:29:41.146 [2024-07-27 02:28:54.374010] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:41.146 [2024-07-27 02:28:54.374077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.146 [2024-07-27 02:28:54.374105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.374122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.146 [2024-07-27 02:28:54.374135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.374149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.146 [2024-07-27 02:28:54.374163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.374177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:41.146 [2024-07-27 02:28:54.374190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:54.374208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:41.146 [2024-07-27 02:28:54.374272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe63850 (9): Bad file descriptor 00:29:41.146 [2024-07-27 02:28:54.377626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.146 [2024-07-27 02:28:54.451393] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:41.146 [2024-07-27 02:28:58.063038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.146 [2024-07-27 02:28:58.063099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.146 [2024-07-27 02:28:58.063146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.146 [2024-07-27 02:28:58.063178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.146 [2024-07-27 02:28:58.063210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.146 [2024-07-27 02:28:58.063241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.146 [2024-07-27 02:28:58.063270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.146 [2024-07-27 02:28:58.063300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.146 [2024-07-27 02:28:58.063654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.146 [2024-07-27 02:28:58.063669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:41.147 [2024-07-27 02:28:58.063697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.063983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.063996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.147 [2024-07-27 02:28:58.064579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.147 [2024-07-27 02:28:58.064598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83224 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:41.147 [2024-07-27 02:28:58.064611 .. 02:28:58.065729] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [repeated *NOTICE* record pairs, one per in-flight WRITE: sqid:1 nsid:1 lba:83232 .. lba:83520 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, various cid values; every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:41.148 [2024-07-27 02:28:58.065759 .. 02:28:58.067792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request / 243 / 474: [repeated *ERROR* aborting queued i/o followed by *NOTICE* Command completed manually: for each queued WRITE sqid:1 cid:0 nsid:1 lba:83528 .. lba:83824 in steps of 8, len:8, PRP1 0x0 PRP2 0x0; every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
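The abort records above are uniform enough to tally mechanically. Below is a minimal sketch (a hypothetical helper, not part of the SPDK tree; the record format is taken from the lines above) that counts aborted commands per opcode and reports the LBA range they covered:

    #!/usr/bin/env python3
    """Tally aborted NVMe commands from a saved copy of this autotest log.
    Hypothetical helper -- not part of the SPDK tree."""
    import re
    import sys
    from collections import defaultdict

    # One pattern per nvme_io_qpair_print_command record, e.g.:
    #   *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83232 len:8 SGL DATA BLOCK ...
    CMD_RE = re.compile(
        r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def tally(stream):
        stats = defaultdict(lambda: {"count": 0, "lba_min": None, "lba_max": None})
        for line in stream:
            # finditer, because the captured log fuses several records per line
            for m in CMD_RE.finditer(line):
                op, lba = m.group(1), int(m.group(5))
                s = stats[op]
                s["count"] += 1
                s["lba_min"] = lba if s["lba_min"] is None else min(s["lba_min"], lba)
                s["lba_max"] = lba if s["lba_max"] is None else max(s["lba_max"], lba)
        return stats

    if __name__ == "__main__":
        for op, s in sorted(tally(sys.stdin).items()):
            print(f"{op}: {s['count']} records, lba {s['lba_min']}..{s['lba_max']}")

Fed the stretch above on stdin, it would report a single WRITE burst spanning lba 83232..83824, all carrying the same SQ-deletion status.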
00:29:41.150 [2024-07-27 02:28:58.067803 .. 02:28:58.067916] nvme_qpair.c: 579 / 558 / 243 / 474: [three more queued requests aborted and completed manually: READ sqid:1 cid:0 nsid:1 lba:82864, lba:82872, lba:82880, len:8, PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:41.150 [2024-07-27 02:28:58.067971] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe87670 was disconnected and freed. reset controller.
00:29:41.150 [2024-07-27 02:28:58.067990] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:41.150 [2024-07-27 02:28:58.068039 .. 02:28:58.068178] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: [four ASYNC EVENT REQUEST (0c) admin commands, qid:0 cid:3/2/1/0 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:41.151 [2024-07-27 02:28:58.068192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.151 [2024-07-27 02:28:58.068233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe63850 (9): Bad file descriptor
00:29:41.151 [2024-07-27 02:28:58.071533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.151 [2024-07-27 02:28:58.146227] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:41.151 [2024-07-27 02:29:02.599989 .. 02:29:02.600182] nvme_qpair.c: 223 / 474: [four more ASYNC EVENT REQUEST (0c) admin commands, qid:0 cid:0/1/2/3, each ABORTED - SQ DELETION (00/08)]
00:29:41.151 [2024-07-27 02:29:02.600196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe63850 is same with the state(5) to be set
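Every completion printed in this stretch carries the status pair (00/08). In the NVMe completion-queue entry that is Status Code Type 0x0 (generic command status) and Status Code 0x08, Command Aborted due to SQ Deletion, which is what spdk_nvme_print_completion renders as ABORTED - SQ DELETION. A small decoding sketch (hypothetical helper; the table is trimmed to generic codes relevant to this log):

    # Decode the "(sct/sc)" pair that spdk_nvme_print_completion appends,
    # e.g. "ABORTED - SQ DELETION (00/08)". Hypothetical helper; table
    # trimmed to Status Code Type 0x0 (generic command status) entries.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x07: "COMMAND ABORT REQUESTED",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(pair: str) -> str:
        sct, sc = (int(field, 16) for field in pair.split("/"))
        if sct == 0x0:  # generic command status
            return GENERIC_STATUS.get(sc, f"generic sc 0x{sc:02x}")
        return f"sct 0x{sct:x} sc 0x{sc:02x}"

    assert decode_status("00/08") == "ABORTED - SQ DELETION"

Note also dnr:0 in the same completions: the Do Not Retry bit is clear, so the driver is allowed to requeue this i/o once the failover to 10.0.0.2:4422 completes.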
00:29:41.151 [2024-07-27 02:29:02.600779 .. 02:29:02.603583] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [repeated *NOTICE* record pairs for the remaining in-flight i/o, various cid values: WRITE sqid:1 nsid:1 lba:87200 .. lba:87432, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, interleaved with READ sqid:1 nsid:1 lba:86696 .. lba:87128, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:41.153 [2024-07-27 02:29:02.603598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87440 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.153 [2024-07-27 02:29:02.603864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.153 [2024-07-27 02:29:02.603877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.603892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 
02:29:02.603907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.603922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.603944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.603959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.603973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.603988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.154 [2024-07-27 02:29:02.604691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.154 [2024-07-27 02:29:02.604721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.154 [2024-07-27 02:29:02.604749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.154 [2024-07-27 02:29:02.604779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.154 [2024-07-27 02:29:02.604809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.154 [2024-07-27 02:29:02.604842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.154 [2024-07-27 02:29:02.604871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.154 [2024-07-27 02:29:02.604900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.604915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe87330 is same with the state(5) to be set 00:29:41.154 [2024-07-27 02:29:02.604931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:41.154 [2024-07-27 02:29:02.604943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:41.154 [2024-07-27 02:29:02.604955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87192 len:8 PRP1 0x0 PRP2 0x0 00:29:41.154 [2024-07-27 02:29:02.604968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.154 [2024-07-27 02:29:02.605025] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe87330 was disconnected and freed. reset controller. 00:29:41.154 [2024-07-27 02:29:02.605055] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:41.154 [2024-07-27 02:29:02.605094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.154 [2024-07-27 02:29:02.608427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.154 [2024-07-27 02:29:02.608466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe63850 (9): Bad file descriptor 00:29:41.154 [2024-07-27 02:29:02.679894] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
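When triaging a run like this it is usually more useful to tally the abort notices than to read them. A small sketch, assuming the bdevperf output was captured to the try.txt file this script writes:

```bash
# Count the aborted completions and break the aborted commands down by
# opcode; the (00/08) status in the pairs above is ABORTED - SQ DELETION.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

grep -c 'ABORTED - SQ DELETION' "$log"
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" \
    | awk '{print $NF}' | sort | uniq -c
```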
00:29:41.154 00:29:41.155 Latency(us) 00:29:41.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.155 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:41.155 Verification LBA range: start 0x0 length 0x4000 00:29:41.155 NVMe0n1 : 15.01 8033.06 31.38 567.03 0.00 14855.59 1074.06 18155.90 00:29:41.155 =================================================================================================================== 00:29:41.155 Total : 8033.06 31.38 567.03 0.00 14855.59 1074.06 18155.90 00:29:41.155 Received shutdown signal, test time was about 15.000000 seconds 00:29:41.155 00:29:41.155 Latency(us) 00:29:41.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.155 =================================================================================================================== 00:29:41.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1146133 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1146133 /var/tmp/bdevperf.sock 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1146133 ']' 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:41.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:41.155 02:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:41.155 [2024-07-27 02:29:09.046555] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:41.155 02:29:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:41.415 [2024-07-27 02:29:09.307251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:41.415 02:29:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:41.673 NVMe0n1 00:29:41.673 02:29:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:42.240 00:29:42.240 02:29:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:42.498 00:29:42.498 02:29:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:42.498 02:29:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:42.756 02:29:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:43.013 02:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:46.327 02:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:46.327 02:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:46.327 02:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1146802 00:29:46.327 02:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:46.327 02:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1146802 00:29:47.702 0 00:29:47.702 02:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:47.702 [2024-07-27 02:29:08.578678] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:29:47.702 [2024-07-27 02:29:08.578775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146133 ] 00:29:47.702 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.702 [2024-07-27 02:29:08.611009] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:47.702 [2024-07-27 02:29:08.640035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.702 [2024-07-27 02:29:08.723071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.702 [2024-07-27 02:29:11.081190] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:47.702 [2024-07-27 02:29:11.081291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.702 [2024-07-27 02:29:11.081314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.702 [2024-07-27 02:29:11.081331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.702 [2024-07-27 02:29:11.081345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.702 [2024-07-27 02:29:11.081360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.702 [2024-07-27 02:29:11.081375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.702 [2024-07-27 02:29:11.081390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:47.702 [2024-07-27 02:29:11.081405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.702 [2024-07-27 02:29:11.081419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.702 [2024-07-27 02:29:11.081464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.702 [2024-07-27 02:29:11.081496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129e850 (9): Bad file descriptor 00:29:47.702 [2024-07-27 02:29:11.088569] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:47.702 Running I/O for 1 seconds... 
00:29:47.702 00:29:47.702 Latency(us) 00:29:47.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.702 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:47.702 Verification LBA range: start 0x0 length 0x4000 00:29:47.703 NVMe0n1 : 1.01 8835.82 34.51 0.00 0.00 14430.25 3131.16 12718.84 00:29:47.703 =================================================================================================================== 00:29:47.703 Total : 8835.82 34.51 0.00 0.00 14430.25 3131.16 12718.84 00:29:47.703 02:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:47.703 02:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:47.703 02:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:47.960 02:29:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:47.960 02:29:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:48.218 02:29:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.476 02:29:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1146133 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1146133 ']' 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1146133 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1146133 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1146133' 00:29:51.767 killing process with pid 1146133 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1146133 00:29:51.767 02:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1146133 00:29:52.026 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:52.026 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
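Stripped of the xtrace noise, the RPC sequence driven through this stretch of the log is a short rotation: expose the subsystem on the secondary ports, attach one controller path per port under the same bdev name, then detach the paths one at a time so bdev_nvme is forced to fail over. A condensed sketch, with error handling omitted:

```bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on the secondary ports (4420 is already listening).
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

# Attach one controller path per port under the same bdev name.
for port in 4420 4421 4422; do
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
done

# Pull paths one at a time, in the order the trace shows; each detach
# should trigger a failover onto a surviving path.
for port in 4420 4422 4421; do
    "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
    sleep 3
done
```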
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.284 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:52.284 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:52.284 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:52.284 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.284 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:29:52.284 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.285 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:29:52.285 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.285 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.285 rmmod nvme_tcp 00:29:52.285 rmmod nvme_fabrics 00:29:52.285 rmmod nvme_keyring 00:29:52.285 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1143883 ']' 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1143883 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1143883 ']' 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1143883 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143883 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143883' 00:29:52.545 killing process with pid 1143883 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1143883 00:29:52.545 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1143883 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.804 02:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:54.710 00:29:54.710 real 0m35.046s 00:29:54.710 user 2m1.767s 00:29:54.710 sys 0m6.711s 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.710 ************************************ 00:29:54.710 END TEST nvmf_failover 00:29:54.710 ************************************ 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.710 ************************************ 00:29:54.710 START TEST nvmf_host_discovery 00:29:54.710 ************************************ 00:29:54.710 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:54.968 * Looking for test storage... 00:29:54.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.968 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:54.969 02:29:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:29:54.969 02:29:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:56.873 02:29:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:56.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:56.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:56.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:56.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.873 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:56.874 02:29:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.874 02:29:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:56.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:29:56.874 00:29:56.874 --- 10.0.0.2 ping statistics --- 00:29:56.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.874 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:29:56.874 00:29:56.874 --- 10.0.0.1 ping statistics --- 00:29:56.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.874 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:56.874 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1149479 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1149479 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1149479 ']' 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
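The network bring-up traced just above boils down to: move the target side of the NIC pair into its own network namespace, address both ends, open the NVMe/TCP port, and prove reachability in both directions. Roughly:

```bash
# Namespace plumbing as traced above: cvl_0_0 becomes the in-namespace
# target interface (10.0.0.2); cvl_0_1 stays in the root namespace as the
# initiator interface (10.0.0.1).
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
```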
00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.134 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.134 [2024-07-27 02:29:25.088268] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:29:57.134 [2024-07-27 02:29:25.088364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.134 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.134 [2024-07-27 02:29:25.125508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:57.134 [2024-07-27 02:29:25.153276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.134 [2024-07-27 02:29:25.243876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.134 [2024-07-27 02:29:25.243951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.134 [2024-07-27 02:29:25.243964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.134 [2024-07-27 02:29:25.243976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.134 [2024-07-27 02:29:25.243986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.134 [2024-07-27 02:29:25.244012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.393 [2024-07-27 02:29:25.391378] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.393 [2024-07-27 02:29:25.399623] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.393 null0 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.393 null1 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@45 -- # hostpid=1149539 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1149539 /tmp/host.sock 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1149539 ']' 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:57.393 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:57.393 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:57.394 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:57.394 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.394 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.394 [2024-07-27 02:29:25.476438] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:29:57.394 [2024-07-27 02:29:25.476532] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149539 ] 00:29:57.394 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.394 [2024-07-27 02:29:25.509324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
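At this point the target nvmf_tgt (pid 1149479) is up inside the namespace on the default RPC socket, and a second nvmf_tgt (pid 1149539) is starting in the root namespace as the host side, with its RPC server on /tmp/host.sock. rpc_cmd in the traces is the suite's wrapper around scripts/rpc.py, and the -s flag picks which app a call reaches. Roughly, using the flags and paths from this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Target app inside the namespace, core mask 0x2, all tracepoint groups on:
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
# Host-side app on its own core and its own RPC socket:
"$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
# Without -s the call lands on the target app (default /var/tmp/spdk.sock)...
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
# ...with -s it lands on the host app instead:
"$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers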
00:29:57.394 [2024-07-27 02:29:25.537779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.651 [2024-07-27 02:29:25.628877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.651 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:57.651 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:57.652 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
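The rpc_cmd | jq -r '.[].name' | sort | xargs pipelines repeated above are the suite's two query helpers, get_subsystem_names and get_bdev_list. Reconstructed from the trace (a sketch; the real definitions live in host/discovery.sh):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
get_subsystem_names() {   # controller names the host app currently sees
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # bdevs (attached namespaces) on the host app
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}
# Both print "" until discovery attaches something, which is exactly what
# the [[ '' == '' ]] assertions above are checking.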
00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:57.910 02:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:57.910 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:57.910 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.910 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 [2024-07-27 02:29:26.033297] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.910 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.910 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:57.910 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:57.911 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:57.911 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.911 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.911 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:57.911 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:57.911 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
jq -r '.[].name' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:29:58.170 02:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:58.740 [2024-07-27 02:29:26.769742] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:58.740 [2024-07-27 02:29:26.769782] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:58.740 [2024-07-27 02:29:26.769809] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:58.740 [2024-07-27 02:29:26.858076] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:59.000 [2024-07-27 02:29:26.920728] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:59.000 [2024-07-27 02:29:26.920754] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:59.258 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:59.258 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:59.258 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
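Every waitforcondition call in the trace expands to the same retry loop: evaluate the condition string, sleep a second, give up after ten attempts. A sketch matching the local max=10 / (( max-- )) / eval records above:

waitforcondition() {   # poll a shell condition until it holds or retries run out
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        sleep 1
    done
    return 1                        # condition never became true
}

Here the first probe saw no controllers, the one-second sleep gave bdev_nvme_start_discovery time to attach nvme0 through the 8009 discovery service (the attach records just above), and the second probe matched.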
00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.259 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:59.518 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.518 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:59.518 02:29:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:00.900 02:29:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:00.900 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.901 [2024-07-27 02:29:28.713284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:00.901 [2024-07-27 02:29:28.713907] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:00.901 [2024-07-27 02:29:28.713950] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:00.901 02:29:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.901 [2024-07-27 02:29:28.802280] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:00.901 02:29:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:01.160 [2024-07-27 02:29:29.069570] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:01.160 [2024-07-27 02:29:29.069605] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:01.160 [2024-07-27 02:29:29.069615] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:01.725 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.984 [2024-07-27 02:29:29.937336] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:01.984 [2024-07-27 02:29:29.937397] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:01.984 [2024-07-27 02:29:29.942897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.984 [2024-07-27 02:29:29.942936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.984 [2024-07-27 02:29:29.942963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.984 [2024-07-27 02:29:29.942990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.984 [2024-07-27 02:29:29.943022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.984 [2024-07-27 02:29:29.943036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.984 [2024-07-27 02:29:29.943050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.984 [2024-07-27 02:29:29.943072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.984 [2024-07-27 02:29:29.943088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:01.984 [2024-07-27 02:29:29.952905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.984 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.984 [2024-07-27 02:29:29.962949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.984 [2024-07-27 02:29:29.963276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.984 [2024-07-27 02:29:29.963307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c66e0 with addr=10.0.0.2, port=4420 00:30:01.984 [2024-07-27 02:29:29.963325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.984 [2024-07-27 02:29:29.963365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.984 [2024-07-27 02:29:29.963402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.984 [2024-07-27 02:29:29.963422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.984 [2024-07-27 02:29:29.963440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.984 [2024-07-27 02:29:29.963463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
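The notification bookkeeping earlier in the trace (notification_count=1, notify_id 0 -> 1 -> 2) works as a cursor: notify_get_notifications -i <id> returns only events newer than <id>, jq '. | length' counts them, and the cursor advances past them (0 -> 1 after null0's namespace was added, 1 -> 2 after null1's, then a count of 0 once nothing new arrived). A sketch of the counter implied by those records; the advance-by-count rule is inferred, not shown verbatim:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
get_notification_count() {
    # Count events the host app emitted since the last call...
    notification_count=$("$SPDK/scripts/rpc.py" -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    # ...and move the cursor past them.
    notify_id=$(( notify_id + notification_count ))
}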
00:30:01.984 [2024-07-27 02:29:29.973038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.984 [2024-07-27 02:29:29.973316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.984 [2024-07-27 02:29:29.973359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c66e0 with addr=10.0.0.2, port=4420 00:30:01.984 [2024-07-27 02:29:29.973378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.984 [2024-07-27 02:29:29.973403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.984 [2024-07-27 02:29:29.973446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.984 [2024-07-27 02:29:29.973466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.985 [2024-07-27 02:29:29.973482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.985 [2024-07-27 02:29:29.973503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.985 [2024-07-27 02:29:29.983135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.985 [2024-07-27 02:29:29.983354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.985 [2024-07-27 02:29:29.983384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c66e0 with addr=10.0.0.2, port=4420 00:30:01.985 [2024-07-27 02:29:29.983400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.985 [2024-07-27 02:29:29.983422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.985 [2024-07-27 02:29:29.983442] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.985 [2024-07-27 02:29:29.983456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.985 [2024-07-27 02:29:29.983469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.985 [2024-07-27 02:29:29.983487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:01.985 02:29:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:01.985 [2024-07-27 02:29:29.993222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.985 [2024-07-27 02:29:29.993461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.985 [2024-07-27 02:29:29.993493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c66e0 with addr=10.0.0.2, port=4420 00:30:01.985 [2024-07-27 02:29:29.993512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.985 [2024-07-27 02:29:29.993537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.985 [2024-07-27 02:29:29.993560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.985 [2024-07-27 02:29:29.993582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.985 [2024-07-27 02:29:29.993598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.985 [2024-07-27 02:29:29.993620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.985 [2024-07-27 02:29:30.003299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.985 [2024-07-27 02:29:30.003579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.985 [2024-07-27 02:29:30.003610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c66e0 with addr=10.0.0.2, port=4420 00:30:01.985 [2024-07-27 02:29:30.003627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.985 [2024-07-27 02:29:30.003650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.985 [2024-07-27 02:29:30.003671] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.985 [2024-07-27 02:29:30.003686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.985 [2024-07-27 02:29:30.003700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.985 [2024-07-27 02:29:30.003719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.985 [2024-07-27 02:29:30.013382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.985 [2024-07-27 02:29:30.013663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.985 [2024-07-27 02:29:30.013693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c66e0 with addr=10.0.0.2, port=4420 00:30:01.985 [2024-07-27 02:29:30.013711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.985 [2024-07-27 02:29:30.013734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.985 [2024-07-27 02:29:30.013755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.985 [2024-07-27 02:29:30.013769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.985 [2024-07-27 02:29:30.013783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.985 [2024-07-27 02:29:30.013802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
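The get_bdev_list call being polled through all of this reconnect noise follows directly from the host/discovery.sh@55 xtrace lines in this log: dump all bdevs over the host RPC socket and flatten the names into one sorted line. Only the function wrapper itself is assumed here; the pipeline is verbatim from the trace:

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

The wait succeeds once this prints 'nvme0n1 nvme0n2', i.e. once both namespaces of the re-attached controller are visible as bdevs.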
00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.985 [2024-07-27 02:29:30.023485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:01.985 [2024-07-27 02:29:30.023749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.985 [2024-07-27 02:29:30.023780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c66e0 with addr=10.0.0.2, port=4420 00:30:01.985 [2024-07-27 02:29:30.023799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c66e0 is same with the state(5) to be set 00:30:01.985 [2024-07-27 02:29:30.023824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c66e0 (9): Bad file descriptor 00:30:01.985 [2024-07-27 02:29:30.023846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.985 [2024-07-27 02:29:30.023862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.985 [2024-07-27 02:29:30.023878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.985 [2024-07-27 02:29:30.023898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.985 [2024-07-27 02:29:30.024308] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:01.985 [2024-07-27 02:29:30.024341] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:01.985 02:29:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:01.985 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:01.986 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:02.245 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.246 02:29:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.179 [2024-07-27 02:29:31.271127] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:03.179 [2024-07-27 02:29:31.271152] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:03.179 [2024-07-27 02:29:31.271173] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:03.437 [2024-07-27 02:29:31.357610] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:03.437 [2024-07-27 02:29:31.467194] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:03.437 [2024-07-27 02:29:31.467228] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 --
xtrace_disable 00:30:03.437 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.437 request: 00:30:03.437 { 00:30:03.437 "name": "nvme", 00:30:03.437 "trtype": "tcp", 00:30:03.437 "traddr": "10.0.0.2", 00:30:03.437 "adrfam": "ipv4", 00:30:03.437 "trsvcid": "8009", 00:30:03.437 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:03.438 "wait_for_attach": true, 00:30:03.438 "method": "bdev_nvme_start_discovery", 00:30:03.438 "req_id": 1 00:30:03.438 } 00:30:03.438 Got JSON-RPC error response 00:30:03.438 response: 00:30:03.438 { 00:30:03.438 "code": -17, 00:30:03.438 "message": "File exists" 00:30:03.438 } 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local 
es=0 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.438 request: 00:30:03.438 { 00:30:03.438 "name": "nvme_second", 00:30:03.438 "trtype": "tcp", 00:30:03.438 "traddr": "10.0.0.2", 00:30:03.438 "adrfam": "ipv4", 00:30:03.438 "trsvcid": "8009", 00:30:03.438 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:03.438 "wait_for_attach": true, 00:30:03.438 "method": "bdev_nvme_start_discovery", 00:30:03.438 "req_id": 1 00:30:03.438 } 00:30:03.438 Got JSON-RPC error response 00:30:03.438 response: 00:30:03.438 { 00:30:03.438 "code": -17, 00:30:03.438 "message": "File exists" 00:30:03.438 } 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:03.438 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.717 02:29:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.717 02:29:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:04.658 [2024-07-27 02:29:32.658604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.658 [2024-07-27 02:29:32.658656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2204210 with addr=10.0.0.2, port=8010 00:30:04.658 [2024-07-27 02:29:32.658684] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:04.658 [2024-07-27 02:29:32.658700] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:04.658 [2024-07-27 02:29:32.658724] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:05.605 [2024-07-27 02:29:33.661179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.605 [2024-07-27 02:29:33.661263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2204210 with addr=10.0.0.2, port=8010 00:30:05.605 [2024-07-27 02:29:33.661294] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:05.605 [2024-07-27 02:29:33.661317] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:05.605 [2024-07-27 02:29:33.661330] bdev_nvme.c:7073:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:30:06.538 [2024-07-27 02:29:34.663230] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:06.538 request: 00:30:06.538 { 00:30:06.538 "name": "nvme_second", 00:30:06.538 "trtype": "tcp", 00:30:06.538 "traddr": "10.0.0.2", 00:30:06.539 "adrfam": "ipv4", 00:30:06.539 "trsvcid": "8010", 00:30:06.539 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:06.539 "wait_for_attach": false, 00:30:06.539 "attach_timeout_ms": 3000, 00:30:06.539 "method": "bdev_nvme_start_discovery", 00:30:06.539 "req_id": 1 00:30:06.539 } 00:30:06.539 Got JSON-RPC error response 00:30:06.539 response: 00:30:06.539 { 00:30:06.539 "code": -110, 00:30:06.539 "message": "Connection timed out" 00:30:06.539 } 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:06.539 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1149539 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.799 rmmod nvme_tcp 00:30:06.799 rmmod nvme_fabrics 00:30:06.799 rmmod nvme_keyring 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.799 02:29:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1149479 ']' 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1149479 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1149479 ']' 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1149479 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1149479 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1149479' 00:30:06.799 killing process with pid 1149479 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1149479 00:30:06.799 02:29:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1149479 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.058 02:29:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.961 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:08.961 00:30:08.961 real 0m14.251s 00:30:08.961 user 0m21.124s 00:30:08.961 sys 0m2.931s 00:30:08.961 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:08.961 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.961 ************************************ 00:30:08.961 END TEST nvmf_host_discovery 00:30:08.961 ************************************ 00:30:08.961 02:29:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:08.961 02:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:08.961 02:29:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:08.961 02:29:37 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.220 ************************************ 00:30:09.220 START TEST nvmf_host_multipath_status 00:30:09.220 ************************************ 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:09.220 * Looking for test storage... 00:30:09.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.220 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:09.221 02:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # net_devs=() 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:11.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.123 
02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:11.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:11.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.123 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.124 02:29:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:11.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.124 02:29:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:11.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:30:11.124 00:30:11.124 --- 10.0.0.2 ping statistics --- 00:30:11.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.124 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:11.124 00:30:11.124 --- 10.0.0.1 ping statistics --- 00:30:11.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.124 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1152704 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1152704 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1152704 ']' 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:11.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:11.124 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:11.382 [2024-07-27 02:29:39.320492] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:30:11.382 [2024-07-27 02:29:39.320574] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.382 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.382 [2024-07-27 02:29:39.359530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:11.382 [2024-07-27 02:29:39.391242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:11.382 [2024-07-27 02:29:39.482249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.382 [2024-07-27 02:29:39.482316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.382 [2024-07-27 02:29:39.482343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.382 [2024-07-27 02:29:39.482357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.382 [2024-07-27 02:29:39.482369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.382 [2024-07-27 02:29:39.482460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.382 [2024-07-27 02:29:39.482467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1152704 00:30:11.639 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:11.896 [2024-07-27 02:29:39.842687] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.896 02:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:12.153 Malloc0 00:30:12.153 02:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:12.411 02:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.668 02:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.924 [2024-07-27 02:29:40.883295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.924 02:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:13.182 [2024-07-27 02:29:41.148000] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1152987 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1152987 /var/tmp/bdevperf.sock 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1152987 ']' 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:13.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
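For reference, the target-side provisioning that the trace above just performed reduces to the following shell sketch. The rpc.py path, NQN, bdev parameters and listener addresses are taken verbatim from the recorded commands; the $rpc variable is only a shorthand introduced here, and the comments are an interpretation of the flags rather than part of the log.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport, options exactly as recorded above
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB malloc backing bdev with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # Subsystem with ANA reporting enabled (-r), any host allowed (-a),
    # serial number -s, and room for two namespaces (-m 2)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Two listeners on the same IP: the two paths the test flips between
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf instance started at @44 then attaches one controller per listener (bdev_nvme_attach_controller on ports 4420 and 4421 with -x multipath), which is what produces the two io_paths queried throughout the rest of the run.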
00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:13.182 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:13.440 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.440 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:13.440 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:13.698 02:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:13.955 Nvme0n1 00:30:13.955 02:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:14.521 Nvme0n1 00:30:14.521 02:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:14.521 02:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:17.051 02:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:17.051 02:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:17.051 02:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:17.309 02:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:18.240 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:18.240 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:18.240 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.240 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.498 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.498 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:18.498 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.498 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:18.756 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:18.756 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:18.756 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.756 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:19.013 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.013 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:19.013 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.013 02:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:19.271 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.271 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:19.271 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.271 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.530 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.530 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:19.530 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.530 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.788 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.788 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:19.788 02:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:20.046 02:29:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:20.306 02:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:21.245 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:21.245 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:21.245 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.245 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:21.503 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.503 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:21.503 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.503 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:21.761 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.761 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:21.761 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.761 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:22.026 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.026 02:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:22.026 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.026 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:22.283 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.283 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:22.283 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.283 02:29:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.540 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.540 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:22.540 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.540 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:22.798 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.798 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:22.798 02:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:23.055 02:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:23.314 02:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:24.247 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:24.247 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:24.247 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.247 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:24.505 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.505 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:24.505 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.505 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:24.762 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.762 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:24.762 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.762 02:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:25.020 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.020 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:25.020 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.020 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:25.282 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.282 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:25.282 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.282 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:25.540 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.540 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:25.540 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.540 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:25.796 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.796 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:25.796 02:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:26.052 02:29:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:26.311 02:29:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:27.246 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:27.246 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:27.246 02:29:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.246 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:27.504 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.504 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:27.504 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.504 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:27.762 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:27.762 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:27.762 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.762 02:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:28.019 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.019 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:28.019 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.019 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:28.277 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.277 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:28.277 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.277 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:28.535 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.535 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:28.535 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.535 02:29:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:28.793 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.793 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:28.793 02:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:29.051 02:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:29.310 02:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:30.245 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:30.245 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:30.245 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.245 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:30.503 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:30.503 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:30.503 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.503 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:30.762 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:30.762 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:30.762 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.762 02:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:31.019 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.019 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.019 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.019 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:31.277 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.277 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:31.277 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.277 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.536 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:31.536 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:31.536 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.536 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.794 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:31.794 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:31.794 02:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:32.052 02:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:32.310 02:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:33.244 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:33.244 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:33.244 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.244 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.502 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:33.502 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:33.502 02:30:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.502 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.760 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.760 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.760 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.760 02:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:34.018 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.018 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:34.018 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.018 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.275 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.275 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:34.275 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.275 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.534 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:34.534 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:34.534 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.792 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.792 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.792 02:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:35.079 02:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:35.079 02:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:35.335 02:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:35.593 02:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:36.967 02:30:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:36.967 02:30:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:36.967 02:30:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.967 02:30:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.967 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.967 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:36.967 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.967 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:37.225 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.225 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:37.225 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.225 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:37.483 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.483 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:37.483 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.483 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:37.742 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.742 02:30:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:37.742 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.742 02:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.999 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.999 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:37.999 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.999 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:38.256 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.256 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:38.256 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:38.512 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:38.770 02:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:39.706 02:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:39.706 02:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:39.706 02:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.706 02:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.964 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.964 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:39.964 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.965 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:40.222 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.223 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:40.223 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.223 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:40.481 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.481 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:40.481 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.481 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:41.047 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.047 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:41.047 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.047 02:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:41.047 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.047 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:41.047 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.047 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:41.305 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.305 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:41.305 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:41.568 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:41.827 02:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
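The block of twelve RPC-plus-jq probes that repeats after every ANA transition above is the script's check_status/port_status pair. Reconstructed from the recorded commands (a sketch of the logic, not a copy of multipath_status.sh; variable names are illustrative), it amounts to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    port_status() {
        # $1 = trsvcid (4420 or 4421), $2 = attribute (current|connected|accessible),
        # $3 = expected value; returns non-zero on mismatch.
        local got
        got=$($rpc -s $bdevperf_rpc_sock bdev_nvme_get_io_paths | \
              jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    check_status() {
        # Expectations, in the order the log probes them:
        # 4420 current, 4421 current, 4420 connected, 4421 connected,
        # 4420 accessible, 4421 accessible.
        port_status 4420 current "$1"    && port_status 4421 current "$2"    &&
        port_status 4420 connected "$3"  && port_status 4421 connected "$4"  &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

So, for example, "check_status true false true true true true" asserts that the 4420 path is the one currently carrying I/O, both paths stay connected, and both remain accessible; after bdev_nvme_set_multipath_policy -p active_active at @116, both "current" probes are expected true at once.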
00:30:43.205 02:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:43.205 02:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:43.206 02:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.206 02:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:43.206 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.206 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:43.206 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.206 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:43.464 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.464 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:43.464 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.464 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:43.722 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.722 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:43.722 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.722 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:43.981 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.981 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:43.981 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.981 02:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:44.238 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.238 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:44.238 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.238 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:44.497 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.497 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:44.497 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:44.755 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:45.012 02:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:45.945 02:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:45.945 02:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:45.945 02:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.945 02:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:46.203 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.203 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:46.203 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.203 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:46.459 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:46.459 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:46.459 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.459 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:46.716 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:46.716 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:46.716 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.716 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:46.974 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.974 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:46.974 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.974 02:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:47.232 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.232 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:47.232 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.232 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1152987 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1152987 ']' 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1152987 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1152987 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1152987' 00:30:47.492 killing process with pid 1152987 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1152987 00:30:47.492 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1152987 00:30:47.753 Connection closed with partial response: 00:30:47.753 00:30:47.753 00:30:47.753 
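Every transition exercised in the run that just completed was driven by a pair of listener updates. From the recorded @59/@60 calls, the set_ANA_state helper reduces to the sketch below: argument one sets the 4420 listener, argument two the 4421 listener, and the states seen in this run are optimized, non_optimized and inaccessible. The comment about the sleep reflects the "sleep 1" that follows each pair in the log.

    set_ANA_state() {
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        # Update the ANA state advertised by each listener; the host is then
        # given one second (the sleep 1 above) to observe the change before
        # check_status re-queries the io_paths.
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }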
02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1152987 00:30:47.753 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:47.753 [2024-07-27 02:29:41.211096] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:30:47.753 [2024-07-27 02:29:41.211198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152987 ] 00:30:47.753 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.753 [2024-07-27 02:29:41.243747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:47.753 [2024-07-27 02:29:41.271745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.753 [2024-07-27 02:29:41.359640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.753 Running I/O for 90 seconds... 00:30:47.753 [2024-07-27 02:29:57.072081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:77 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.072880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.072983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.753 [2024-07-27 02:29:57.073442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:47.753 [2024-07-27 02:29:57.073464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 
p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.073978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.073999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074569] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.754 [2024-07-27 02:29:57.074827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.754 [2024-07-27 02:29:57.074863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:47.754 [2024-07-27 02:29:57.074884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.074899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.074921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.074936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.074958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.074973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.074994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.075009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.075030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.075067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.076818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.076843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.076876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.076895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.076930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.076947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.076977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.076993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.077023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.077055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.077094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.077112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.077143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:69 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.077160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.077190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.077207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.077237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.077254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.077285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:29:57.077301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:29:57.077333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:29:57.077364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.973609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.973684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.973767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.973802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.973826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.973857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.973881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.973908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.973931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.973947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.973970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.973985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.974021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.974239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:30:47.755 [2024-07-27 02:30:12.974393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.755 [2024-07-27 02:30:12.974483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.974526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.974562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.755 [2024-07-27 02:30:12.974599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:47.755 [2024-07-27 02:30:12.974620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.756 [2024-07-27 02:30:12.974635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.974656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.756 [2024-07-27 02:30:12.974672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.974693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.756 [2024-07-27 02:30:12.974709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.974730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.756 [2024-07-27 02:30:12.974745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.756 [2024-07-27 02:30:12.975562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.756 [2024-07-27 02:30:12.975780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.975975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.975998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.756 [2024-07-27 02:30:12.976014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:47.756 [2024-07-27 02:30:12.976420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.756 [2024-07-27 02:30:12.976874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:47.756 [2024-07-27 02:30:12.976912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.757 [2024-07-27 02:30:12.976928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:47.757 [2024-07-27 02:30:12.976950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.757 [2024-07-27 02:30:12.976965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:47.757 [2024-07-27 02:30:12.976986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.757 [2024-07-27 02:30:12.977001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:47.757 [2024-07-27 02:30:12.977022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.757 [2024-07-27 02:30:12.977038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:47.757 [2024-07-27 02:30:12.977469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.757 [2024-07-27 02:30:12.977492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:47.757 [2024-07-27 02:30:12.977519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.757 [2024-07-27 02:30:12.977552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:47.757 [2024-07-27 02:30:12.977577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.757 [2024-07-27 02:30:12.977594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:47.757 [2024-07-27 02:30:12.977617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:47.757 [2024-07-27 02:30:12.977633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:47.757 Received shutdown signal, test time was about 32.720594 
seconds
00:30:47.757
00:30:47.757 Latency(us)
00:30:47.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.757 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:47.757 Verification LBA range: start 0x0 length 0x4000
00:30:47.757 Nvme0n1 : 32.72 7990.52 31.21 0.00 0.00 15992.70 1225.77 4026531.84
00:30:47.757 ===================================================================================================================
00:30:47.757 Total : 7990.52 31.21 0.00 0.00 15992.70 1225.77 4026531.84
00:30:47.757 02:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:48.015 rmmod nvme_tcp 00:30:48.015 rmmod nvme_fabrics 00:30:48.015 rmmod nvme_keyring 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1152704 ']' 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1152704 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1152704 ']' 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1152704 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1152704 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1152704'
00:30:48.015 killing process with pid 1152704 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1152704 00:30:48.015 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1152704 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.274 02:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:50.813 00:30:50.813 real 0m41.281s 00:30:50.813 user 2m4.595s 00:30:50.813 sys 0m10.718s 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:50.813 ************************************ 00:30:50.813 END TEST nvmf_host_multipath_status 00:30:50.813 ************************************ 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.813 ************************************ 00:30:50.813 START TEST nvmf_discovery_remove_ifc 00:30:50.813 ************************************ 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:50.813 * Looking for test storage... 
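Between the END TEST and START TEST banners above, the harness tears the fabric down along one fixed path. A condensed sketch of the nvmftestfini/killprocess sequence that was just traced, with this run's pid, namespace and interface names filled in (the real helpers in nvmf/common.sh and autotest_common.sh carry extra retries and sudo handling that are elided here):

    # Condensed reconstruction of the traced teardown (not the verbatim helpers).
    nvmfpid=1152704                # target pid of this run

    sync
    modprobe -v -r nvme-tcp        # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics

    # killprocess: only kill a pid that is set, alive, and not a sudo wrapper.
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        process_name=$(ps --no-headers -o comm= "$nvmfpid")
        if [ "$process_name" != "sudo" ]; then
            echo "killing process with pid $nvmfpid"
            kill "$nvmfpid" && wait "$nvmfpid"
        fi
    fi

    # nvmf_tcp_fini: drop the SPDK-created namespace and flush the test interface.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1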
00:30:50.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
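The paths/export.sh trace above prepends the same toolchain directories (/opt/go/1.21.1/bin, /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin) on every sourcing, which is why the echoed PATH repeats them six times over. A minimal duplicate-free alternative, sketched here with a hypothetical pathmunge helper that is not part of the SPDK scripts:

    pathmunge() {
        # prepend $1 only if it is not already a PATH component
        case ":$PATH:" in
            *":$1:"*) ;;
            *) PATH="$1:$PATH" ;;
        esac
    }
    pathmunge /opt/protoc/21.7/bin
    pathmunge /opt/golangci/1.54.2/bin
    pathmunge /opt/go/1.21.1/bin
    export PATH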
00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:30:50.813 02:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:30:52.725 02:30:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:52.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:52.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.725 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:52.726 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.726 
02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:52.726 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:52.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:52.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:30:52.726 00:30:52.726 --- 10.0.0.2 ping statistics --- 00:30:52.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.726 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:52.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:30:52.726 00:30:52.726 --- 10.0.0.1 ping statistics --- 00:30:52.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.726 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1159793 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1159793 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1159793 ']' 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
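Condensed from the nvmf_tcp_init trace above: the two e810 ports are split across network namespaces so one machine can act as both target and initiator. The target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, NVMe/TCP traffic is let through the firewall, and both directions are smoke-tested with a single ping. Roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns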
00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.726 02:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.726 [2024-07-27 02:30:20.833874] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:30:52.726 [2024-07-27 02:30:20.833964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.726 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.726 [2024-07-27 02:30:20.872440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:52.984 [2024-07-27 02:30:20.904819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.984 [2024-07-27 02:30:20.993898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.984 [2024-07-27 02:30:20.993962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.984 [2024-07-27 02:30:20.993986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.984 [2024-07-27 02:30:20.993999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.984 [2024-07-27 02:30:20.994011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.984 [2024-07-27 02:30:20.994041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.984 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.984 [2024-07-27 02:30:21.140395] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.242 [2024-07-27 02:30:21.148588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:53.242 null0 00:30:53.242 [2024-07-27 02:30:21.180532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1159821 00:30:53.242 02:30:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1159821 /tmp/host.sock 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1159821 ']' 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:53.242 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.242 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.242 [2024-07-27 02:30:21.245935] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:30:53.242 [2024-07-27 02:30:21.245999] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159821 ] 00:30:53.242 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.242 [2024-07-27 02:30:21.277917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
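The test drives two SPDK app instances side by side: the target (nvmfpid 1159793) runs inside the namespace on the default /var/tmp/spdk.sock RPC socket, while the host-side instance (hostpid 1159821) is bound to /tmp/host.sock and starts paused under --wait-for-rpc so its bdev_nvme options can be set before the framework comes up, as the trace that follows shows. A rough sketch of the pattern, with the long Jenkins workspace paths shortened:

    # target: core 1 (-m 0x2), default RPC socket, inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # host side: core 0 (-m 0x1), private RPC socket, bdev_nvme debug logging
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # every rpc.py call picks its instance via -s; finish host-side init:
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init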
00:30:53.242 [2024-07-27 02:30:21.307819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.242 [2024-07-27 02:30:21.398302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.500 02:30:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.877 [2024-07-27 02:30:22.609266] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:54.877 [2024-07-27 02:30:22.609292] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:54.877 [2024-07-27 02:30:22.609315] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:54.877 [2024-07-27 02:30:22.695608] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:54.877 [2024-07-27 02:30:22.879672] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:54.877 [2024-07-27 02:30:22.879744] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:54.877 [2024-07-27 02:30:22.879789] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:54.877 [2024-07-27 02:30:22.879817] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:54.877 [2024-07-27 02:30:22.879845] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:54.877 02:30:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.877 [2024-07-27 02:30:22.886956] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1df2370 was disconnected and freed. delete nvme_qpair. 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.877 02:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.877 02:30:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:54.877 02:30:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:56.250 02:30:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:57.188 02:30:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:58.125 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.126 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:58.126 02:30:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.061 02:30:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:59.061 02:30:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:00.440 02:30:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.440 [2024-07-27 02:30:28.321011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:00.440 [2024-07-27 02:30:28.321104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.440 [2024-07-27 02:30:28.321126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.440 [2024-07-27 02:30:28.321146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.440 [2024-07-27 02:30:28.321160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.440 [2024-07-27 02:30:28.321174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.440 [2024-07-27 02:30:28.321189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.440 [2024-07-27 02:30:28.321203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:00.440 [2024-07-27 02:30:28.321230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.440 [2024-07-27 02:30:28.321245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.440 [2024-07-27 02:30:28.321258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.440 [2024-07-27 02:30:28.321282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db8d70 is same with the state(5) to be set 00:31:00.440 [2024-07-27 02:30:28.331031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db8d70 (9): Bad file descriptor 00:31:00.440 [2024-07-27 02:30:28.341096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:01.377 [2024-07-27 02:30:29.389086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:01.377 [2024-07-27 02:30:29.389134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db8d70 with addr=10.0.0.2, port=4420 00:31:01.377 [2024-07-27 02:30:29.389156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db8d70 is same with the state(5) to be set 00:31:01.377 [2024-07-27 02:30:29.389192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db8d70 (9): Bad file descriptor 00:31:01.377 [2024-07-27 02:30:29.389609] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:01.377 [2024-07-27 02:30:29.389653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:01.377 [2024-07-27 02:30:29.389673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:01.377 [2024-07-27 02:30:29.389690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:01.377 [2024-07-27 02:30:29.389715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
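The bdev_get_bdevs round-trips repeated above are the script's wait_for_bdev polling: once `ip addr del` and `ip link set cvl_0_0 down` pull the target interface, the loop spins until nvme0n1 drops out of the host instance's bdev list, and the errno 110 connect failures and failed resets in between are the expected symptom while the timeouts run down. Condensed roughly from the xtrace:

    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }
    wait_for_bdev ''    # empty list once nvme0n1 is finally torn down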
00:31:01.377 [2024-07-27 02:30:29.389734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:01.377 02:30:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:02.313 [2024-07-27 02:30:30.392238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:02.313 [2024-07-27 02:30:30.392289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:02.313 [2024-07-27 02:30:30.392311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:02.313 [2024-07-27 02:30:30.392325] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:02.313 [2024-07-27 02:30:30.392371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:02.313 [2024-07-27 02:30:30.392427] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:02.313 [2024-07-27 02:30:30.392479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.313 [2024-07-27 02:30:30.392503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.313 [2024-07-27 02:30:30.392525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.313 [2024-07-27 02:30:30.392549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.313 [2024-07-27 02:30:30.392567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.314 [2024-07-27 02:30:30.392583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.314 [2024-07-27 02:30:30.392599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.314 [2024-07-27 02:30:30.392614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.314 [2024-07-27 02:30:30.392630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.314 [2024-07-27 02:30:30.392645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.314 [2024-07-27 02:30:30.392660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
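How aggressively the lost controller is retried, and when it is finally deleted, follows from the options passed when discovery was started earlier in the trace: reconnect attempts every second, pending I/O failed fast after one second, and the controller dropped once it has been unreachable for two. The call as issued against the host socket:

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach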
00:31:02.314 [2024-07-27 02:30:30.392841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db8210 (9): Bad file descriptor 00:31:02.314 [2024-07-27 02:30:30.393858] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:02.314 [2024-07-27 02:30:30.393883] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.314 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:02.574 02:30:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:03.510 02:30:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:03.510 02:30:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:04.444 [2024-07-27 02:30:32.404531] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:04.444 [2024-07-27 02:30:32.404556] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:04.444 [2024-07-27 02:30:32.404580] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:04.444 [2024-07-27 02:30:32.490860] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:04.444 [2024-07-27 02:30:32.554761] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:04.444 [2024-07-27 02:30:32.554813] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:04.444 [2024-07-27 02:30:32.554850] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:04.444 [2024-07-27 02:30:32.554875] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:04.444 [2024-07-27 02:30:32.554890] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:04.444 [2024-07-27 02:30:32.563322] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1dfb900 was disconnected and freed. delete nvme_qpair. 
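With the address restored and the link back up, the discovery service on 10.0.0.2:8009 becomes reachable again and attaches the subsystem as a brand-new controller (nvme1, hence bdev nvme1n1 rather than nvme0n1). The unplug/replug core of the test, condensed from the commands in the trace and reusing the wait_for_bdev sketch above:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # unplug
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''             # nvme0n1 disappears after the loss timeout
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # replug
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1        # discovery re-attaches as a new controller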
00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:04.444 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1159821 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1159821 ']' 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1159821 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1159821 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1159821' 00:31:04.704 killing process with pid 1159821 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1159821 00:31:04.704 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1159821 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:04.964 rmmod nvme_tcp 00:31:04.964 rmmod nvme_fabrics 00:31:04.964 rmmod nvme_keyring 00:31:04.964 02:30:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1159793 ']' 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1159793 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1159793 ']' 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1159793 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1159793 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1159793' 00:31:04.964 killing process with pid 1159793 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1159793 00:31:04.964 02:30:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1159793 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.229 02:30:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:07.135 00:31:07.135 real 0m16.781s 00:31:07.135 user 0m23.670s 00:31:07.135 sys 0m2.993s 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.135 ************************************ 00:31:07.135 END TEST nvmf_discovery_remove_ifc 00:31:07.135 ************************************ 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.135 ************************************ 00:31:07.135 START TEST nvmf_identify_kernel_target 00:31:07.135 ************************************ 00:31:07.135 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:07.393 * Looking for test storage... 00:31:07.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.393 02:30:35 
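Annotation: the nvmf/common.sh sourcing traced around this point establishes the per-run environment every host suite reuses. A condensed view, with values exactly as traced above (only the comments are editorial):

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)      # fresh host NQN per run, via nvme-cli
NVME_HOSTID=${NVME_HOSTNQN##*:}       # the uuid suffix doubles as the host id
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NET_TYPE=phy                          # drive real NICs rather than veth pairs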
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.393 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:07.394 02:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:09.324 
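Annotation: two things happen in the span above — the target's argument vector is assembled and the NIC scan begins. A condensed sketch (the appended arguments are exactly as traced; the surrounding comments, including the reading of -i as the shared-memory id and -e as the tracepoint-group mask, are editorial):

# build_nvmf_app_args: shared-memory instance id plus an all-groups trace mask;
# the 0-eq-1 guards above skip the hugepage/no-PCI extras on this run
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
NVMF_APP+=("${NO_HUGE[@]}")           # empty array here, so a no-op
# gather_supported_nvmf_pci_devs, continuing below, buckets NICs by PCI
# vendor:device id; 0x8086:0x159b (matched twice below) is an Intel E810
# port bound to the ice driver, reported as "Found 0000:0a:00.x"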
02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:09.324 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:09.324 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:09.324 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:09.324 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:09.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
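Annotation: the ping replies that follow confirm the loopback topology nvmf_tcp_init just built — the target port moves into a private namespace, the initiator port stays in the default one, and NVMe/TCP traffic on 4420 is explicitly allowed. Spelled out as a standalone sketch (commands exactly as traced above and below; only the comments are added):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator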
00:31:09.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:31:09.324 00:31:09.324 --- 10.0.0.2 ping statistics --- 00:31:09.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.324 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:09.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:31:09.324 00:31:09.324 --- 10.0.0.1 ping statistics --- 00:31:09.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.324 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:09.324 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:09.325 02:30:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:10.258 Waiting for block devices as requested 00:31:10.517 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:10.517 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:10.775 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:10.775 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:10.775 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:10.775 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:11.082 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:11.082 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:11.082 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:11.082 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:11.082 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:11.343 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:11.343 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:11.343 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:11.343 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:11.602 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:11.602 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:11.860 No valid GPT data, bailing 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:11.860 02:30:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:11.860 00:31:11.860 Discovery Log Number of Records 2, Generation counter 2 00:31:11.860 =====Discovery Log Entry 0====== 00:31:11.860 trtype: tcp 00:31:11.860 adrfam: ipv4 00:31:11.860 subtype: current discovery subsystem 00:31:11.860 treq: not specified, sq flow control disable supported 00:31:11.860 portid: 1 00:31:11.860 trsvcid: 4420 00:31:11.860 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:11.860 traddr: 10.0.0.1 00:31:11.860 eflags: none 00:31:11.860 sectype: none 00:31:11.860 =====Discovery Log Entry 1====== 00:31:11.860 trtype: tcp 00:31:11.860 adrfam: ipv4 00:31:11.860 subtype: nvme subsystem 00:31:11.860 treq: not specified, sq flow control disable supported 00:31:11.860 portid: 1 00:31:11.860 trsvcid: 4420 00:31:11.860 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:11.860 traddr: 10.0.0.1 00:31:11.860 eflags: none 00:31:11.860 sectype: none 00:31:11.860 02:30:39 
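Annotation: between the GPT probe ("No valid GPT data, bailing" clears /dev/nvme0n1 for use) and the discovery listing above, configure_kernel_target drove the kernel nvmet configfs tree. The bare echoes in the trace truncate their redirection targets, so here is the same sequence with the standard Linux nvmet attribute paths written out — the attr_model guess is supported by the later identify output reporting "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn", and the remaining targets are inferred the same way:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"    # the GPT-free disk found above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"              # expose the subsystem on the port

The nvme discover call that follows (using the generated --hostnqn/--hostid pair) then returns exactly the two-record log captured above: the well-known discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both at 10.0.0.1:4420.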
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:11.860 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:11.860 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.860 ===================================================== 00:31:11.860 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:11.860 ===================================================== 00:31:11.860 Controller Capabilities/Features 00:31:11.860 ================================ 00:31:11.860 Vendor ID: 0000 00:31:11.860 Subsystem Vendor ID: 0000 00:31:11.860 Serial Number: a8f502c94c7cd090b11e 00:31:11.860 Model Number: Linux 00:31:11.860 Firmware Version: 6.7.0-68 00:31:11.860 Recommended Arb Burst: 0 00:31:11.860 IEEE OUI Identifier: 00 00 00 00:31:11.860 Multi-path I/O 00:31:11.860 May have multiple subsystem ports: No 00:31:11.860 May have multiple controllers: No 00:31:11.860 Associated with SR-IOV VF: No 00:31:11.860 Max Data Transfer Size: Unlimited 00:31:11.860 Max Number of Namespaces: 0 00:31:11.860 Max Number of I/O Queues: 1024 00:31:11.860 NVMe Specification Version (VS): 1.3 00:31:11.860 NVMe Specification Version (Identify): 1.3 00:31:11.860 Maximum Queue Entries: 1024 00:31:11.860 Contiguous Queues Required: No 00:31:11.860 Arbitration Mechanisms Supported 00:31:11.860 Weighted Round Robin: Not Supported 00:31:11.860 Vendor Specific: Not Supported 00:31:11.860 Reset Timeout: 7500 ms 00:31:11.860 Doorbell Stride: 4 bytes 00:31:11.860 NVM Subsystem Reset: Not Supported 00:31:11.860 Command Sets Supported 00:31:11.860 NVM Command Set: Supported 00:31:11.860 Boot Partition: Not Supported 00:31:11.860 Memory Page Size Minimum: 4096 bytes 00:31:11.860 Memory Page Size Maximum: 4096 bytes 00:31:11.860 Persistent Memory Region: Not Supported 00:31:11.860 Optional Asynchronous Events Supported 00:31:11.860 Namespace Attribute Notices: Not Supported 00:31:11.860 Firmware Activation Notices: Not Supported 00:31:11.860 ANA Change Notices: Not Supported 00:31:11.860 PLE Aggregate Log Change Notices: Not Supported 00:31:11.860 LBA Status Info Alert Notices: Not Supported 00:31:11.860 EGE Aggregate Log Change Notices: Not Supported 00:31:11.860 Normal NVM Subsystem Shutdown event: Not Supported 00:31:11.860 Zone Descriptor Change Notices: Not Supported 00:31:11.860 Discovery Log Change Notices: Supported 00:31:11.860 Controller Attributes 00:31:11.860 128-bit Host Identifier: Not Supported 00:31:11.860 Non-Operational Permissive Mode: Not Supported 00:31:11.860 NVM Sets: Not Supported 00:31:11.860 Read Recovery Levels: Not Supported 00:31:11.860 Endurance Groups: Not Supported 00:31:11.860 Predictable Latency Mode: Not Supported 00:31:11.860 Traffic Based Keep ALive: Not Supported 00:31:11.860 Namespace Granularity: Not Supported 00:31:11.860 SQ Associations: Not Supported 00:31:11.860 UUID List: Not Supported 00:31:11.860 Multi-Domain Subsystem: Not Supported 00:31:11.860 Fixed Capacity Management: Not Supported 00:31:11.860 Variable Capacity Management: Not Supported 00:31:11.860 Delete Endurance Group: Not Supported 00:31:11.860 Delete NVM Set: Not Supported 00:31:11.860 Extended LBA Formats Supported: Not Supported 00:31:11.860 Flexible Data Placement Supported: Not Supported 00:31:11.860 00:31:11.860 Controller Memory Buffer Support 00:31:11.860 ================================ 00:31:11.860 Supported: No 
00:31:11.860 00:31:11.860 Persistent Memory Region Support 00:31:11.860 ================================ 00:31:11.860 Supported: No 00:31:11.860 00:31:11.860 Admin Command Set Attributes 00:31:11.860 ============================ 00:31:11.860 Security Send/Receive: Not Supported 00:31:11.860 Format NVM: Not Supported 00:31:11.860 Firmware Activate/Download: Not Supported 00:31:11.860 Namespace Management: Not Supported 00:31:11.860 Device Self-Test: Not Supported 00:31:11.860 Directives: Not Supported 00:31:11.860 NVMe-MI: Not Supported 00:31:11.860 Virtualization Management: Not Supported 00:31:11.860 Doorbell Buffer Config: Not Supported 00:31:11.860 Get LBA Status Capability: Not Supported 00:31:11.860 Command & Feature Lockdown Capability: Not Supported 00:31:11.860 Abort Command Limit: 1 00:31:11.860 Async Event Request Limit: 1 00:31:11.860 Number of Firmware Slots: N/A 00:31:11.860 Firmware Slot 1 Read-Only: N/A 00:31:12.118 Firmware Activation Without Reset: N/A 00:31:12.118 Multiple Update Detection Support: N/A 00:31:12.118 Firmware Update Granularity: No Information Provided 00:31:12.118 Per-Namespace SMART Log: No 00:31:12.118 Asymmetric Namespace Access Log Page: Not Supported 00:31:12.118 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:12.118 Command Effects Log Page: Not Supported 00:31:12.118 Get Log Page Extended Data: Supported 00:31:12.118 Telemetry Log Pages: Not Supported 00:31:12.118 Persistent Event Log Pages: Not Supported 00:31:12.118 Supported Log Pages Log Page: May Support 00:31:12.118 Commands Supported & Effects Log Page: Not Supported 00:31:12.118 Feature Identifiers & Effects Log Page:May Support 00:31:12.118 NVMe-MI Commands & Effects Log Page: May Support 00:31:12.118 Data Area 4 for Telemetry Log: Not Supported 00:31:12.118 Error Log Page Entries Supported: 1 00:31:12.118 Keep Alive: Not Supported 00:31:12.118 00:31:12.118 NVM Command Set Attributes 00:31:12.118 ========================== 00:31:12.118 Submission Queue Entry Size 00:31:12.118 Max: 1 00:31:12.118 Min: 1 00:31:12.118 Completion Queue Entry Size 00:31:12.118 Max: 1 00:31:12.118 Min: 1 00:31:12.118 Number of Namespaces: 0 00:31:12.118 Compare Command: Not Supported 00:31:12.118 Write Uncorrectable Command: Not Supported 00:31:12.118 Dataset Management Command: Not Supported 00:31:12.118 Write Zeroes Command: Not Supported 00:31:12.118 Set Features Save Field: Not Supported 00:31:12.118 Reservations: Not Supported 00:31:12.118 Timestamp: Not Supported 00:31:12.118 Copy: Not Supported 00:31:12.118 Volatile Write Cache: Not Present 00:31:12.118 Atomic Write Unit (Normal): 1 00:31:12.118 Atomic Write Unit (PFail): 1 00:31:12.118 Atomic Compare & Write Unit: 1 00:31:12.118 Fused Compare & Write: Not Supported 00:31:12.118 Scatter-Gather List 00:31:12.118 SGL Command Set: Supported 00:31:12.118 SGL Keyed: Not Supported 00:31:12.118 SGL Bit Bucket Descriptor: Not Supported 00:31:12.118 SGL Metadata Pointer: Not Supported 00:31:12.118 Oversized SGL: Not Supported 00:31:12.118 SGL Metadata Address: Not Supported 00:31:12.118 SGL Offset: Supported 00:31:12.118 Transport SGL Data Block: Not Supported 00:31:12.118 Replay Protected Memory Block: Not Supported 00:31:12.118 00:31:12.118 Firmware Slot Information 00:31:12.118 ========================= 00:31:12.118 Active slot: 0 00:31:12.118 00:31:12.118 00:31:12.118 Error Log 00:31:12.118 ========= 00:31:12.118 00:31:12.118 Active Namespaces 00:31:12.118 ================= 00:31:12.118 Discovery Log Page 00:31:12.118 ================== 00:31:12.118 
Generation Counter: 2 00:31:12.118 Number of Records: 2 00:31:12.118 Record Format: 0 00:31:12.118 00:31:12.118 Discovery Log Entry 0 00:31:12.118 ---------------------- 00:31:12.118 Transport Type: 3 (TCP) 00:31:12.118 Address Family: 1 (IPv4) 00:31:12.118 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:12.118 Entry Flags: 00:31:12.118 Duplicate Returned Information: 0 00:31:12.118 Explicit Persistent Connection Support for Discovery: 0 00:31:12.118 Transport Requirements: 00:31:12.118 Secure Channel: Not Specified 00:31:12.118 Port ID: 1 (0x0001) 00:31:12.118 Controller ID: 65535 (0xffff) 00:31:12.118 Admin Max SQ Size: 32 00:31:12.118 Transport Service Identifier: 4420 00:31:12.118 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:12.118 Transport Address: 10.0.0.1 00:31:12.118 Discovery Log Entry 1 00:31:12.118 ---------------------- 00:31:12.118 Transport Type: 3 (TCP) 00:31:12.118 Address Family: 1 (IPv4) 00:31:12.118 Subsystem Type: 2 (NVM Subsystem) 00:31:12.118 Entry Flags: 00:31:12.118 Duplicate Returned Information: 0 00:31:12.118 Explicit Persistent Connection Support for Discovery: 0 00:31:12.118 Transport Requirements: 00:31:12.118 Secure Channel: Not Specified 00:31:12.118 Port ID: 1 (0x0001) 00:31:12.118 Controller ID: 65535 (0xffff) 00:31:12.118 Admin Max SQ Size: 32 00:31:12.118 Transport Service Identifier: 4420 00:31:12.118 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:12.118 Transport Address: 10.0.0.1 00:31:12.118 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:12.118 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.118 get_feature(0x01) failed 00:31:12.118 get_feature(0x02) failed 00:31:12.118 get_feature(0x04) failed 00:31:12.118 ===================================================== 00:31:12.118 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:12.118 ===================================================== 00:31:12.118 Controller Capabilities/Features 00:31:12.118 ================================ 00:31:12.118 Vendor ID: 0000 00:31:12.118 Subsystem Vendor ID: 0000 00:31:12.118 Serial Number: 04ae5e8717b1f498bad1 00:31:12.118 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:12.118 Firmware Version: 6.7.0-68 00:31:12.118 Recommended Arb Burst: 6 00:31:12.118 IEEE OUI Identifier: 00 00 00 00:31:12.118 Multi-path I/O 00:31:12.118 May have multiple subsystem ports: Yes 00:31:12.118 May have multiple controllers: Yes 00:31:12.118 Associated with SR-IOV VF: No 00:31:12.118 Max Data Transfer Size: Unlimited 00:31:12.118 Max Number of Namespaces: 1024 00:31:12.118 Max Number of I/O Queues: 128 00:31:12.118 NVMe Specification Version (VS): 1.3 00:31:12.118 NVMe Specification Version (Identify): 1.3 00:31:12.118 Maximum Queue Entries: 1024 00:31:12.118 Contiguous Queues Required: No 00:31:12.118 Arbitration Mechanisms Supported 00:31:12.118 Weighted Round Robin: Not Supported 00:31:12.118 Vendor Specific: Not Supported 00:31:12.118 Reset Timeout: 7500 ms 00:31:12.118 Doorbell Stride: 4 bytes 00:31:12.118 NVM Subsystem Reset: Not Supported 00:31:12.118 Command Sets Supported 00:31:12.118 NVM Command Set: Supported 00:31:12.118 Boot Partition: Not Supported 00:31:12.118 Memory Page Size Minimum: 4096 bytes 00:31:12.118 Memory Page Size Maximum: 4096 bytes 00:31:12.118 
Persistent Memory Region: Not Supported 00:31:12.118 Optional Asynchronous Events Supported 00:31:12.118 Namespace Attribute Notices: Supported 00:31:12.118 Firmware Activation Notices: Not Supported 00:31:12.118 ANA Change Notices: Supported 00:31:12.118 PLE Aggregate Log Change Notices: Not Supported 00:31:12.118 LBA Status Info Alert Notices: Not Supported 00:31:12.118 EGE Aggregate Log Change Notices: Not Supported 00:31:12.118 Normal NVM Subsystem Shutdown event: Not Supported 00:31:12.118 Zone Descriptor Change Notices: Not Supported 00:31:12.118 Discovery Log Change Notices: Not Supported 00:31:12.119 Controller Attributes 00:31:12.119 128-bit Host Identifier: Supported 00:31:12.119 Non-Operational Permissive Mode: Not Supported 00:31:12.119 NVM Sets: Not Supported 00:31:12.119 Read Recovery Levels: Not Supported 00:31:12.119 Endurance Groups: Not Supported 00:31:12.119 Predictable Latency Mode: Not Supported 00:31:12.119 Traffic Based Keep ALive: Supported 00:31:12.119 Namespace Granularity: Not Supported 00:31:12.119 SQ Associations: Not Supported 00:31:12.119 UUID List: Not Supported 00:31:12.119 Multi-Domain Subsystem: Not Supported 00:31:12.119 Fixed Capacity Management: Not Supported 00:31:12.119 Variable Capacity Management: Not Supported 00:31:12.119 Delete Endurance Group: Not Supported 00:31:12.119 Delete NVM Set: Not Supported 00:31:12.119 Extended LBA Formats Supported: Not Supported 00:31:12.119 Flexible Data Placement Supported: Not Supported 00:31:12.119 00:31:12.119 Controller Memory Buffer Support 00:31:12.119 ================================ 00:31:12.119 Supported: No 00:31:12.119 00:31:12.119 Persistent Memory Region Support 00:31:12.119 ================================ 00:31:12.119 Supported: No 00:31:12.119 00:31:12.119 Admin Command Set Attributes 00:31:12.119 ============================ 00:31:12.119 Security Send/Receive: Not Supported 00:31:12.119 Format NVM: Not Supported 00:31:12.119 Firmware Activate/Download: Not Supported 00:31:12.119 Namespace Management: Not Supported 00:31:12.119 Device Self-Test: Not Supported 00:31:12.119 Directives: Not Supported 00:31:12.119 NVMe-MI: Not Supported 00:31:12.119 Virtualization Management: Not Supported 00:31:12.119 Doorbell Buffer Config: Not Supported 00:31:12.119 Get LBA Status Capability: Not Supported 00:31:12.119 Command & Feature Lockdown Capability: Not Supported 00:31:12.119 Abort Command Limit: 4 00:31:12.119 Async Event Request Limit: 4 00:31:12.119 Number of Firmware Slots: N/A 00:31:12.119 Firmware Slot 1 Read-Only: N/A 00:31:12.119 Firmware Activation Without Reset: N/A 00:31:12.119 Multiple Update Detection Support: N/A 00:31:12.119 Firmware Update Granularity: No Information Provided 00:31:12.119 Per-Namespace SMART Log: Yes 00:31:12.119 Asymmetric Namespace Access Log Page: Supported 00:31:12.119 ANA Transition Time : 10 sec 00:31:12.119 00:31:12.119 Asymmetric Namespace Access Capabilities 00:31:12.119 ANA Optimized State : Supported 00:31:12.119 ANA Non-Optimized State : Supported 00:31:12.119 ANA Inaccessible State : Supported 00:31:12.119 ANA Persistent Loss State : Supported 00:31:12.119 ANA Change State : Supported 00:31:12.119 ANAGRPID is not changed : No 00:31:12.119 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:12.119 00:31:12.119 ANA Group Identifier Maximum : 128 00:31:12.119 Number of ANA Group Identifiers : 128 00:31:12.119 Max Number of Allowed Namespaces : 1024 00:31:12.119 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:12.119 Command Effects Log Page: Supported 
00:31:12.119 Get Log Page Extended Data: Supported 00:31:12.119 Telemetry Log Pages: Not Supported 00:31:12.119 Persistent Event Log Pages: Not Supported 00:31:12.119 Supported Log Pages Log Page: May Support 00:31:12.119 Commands Supported & Effects Log Page: Not Supported 00:31:12.119 Feature Identifiers & Effects Log Page:May Support 00:31:12.119 NVMe-MI Commands & Effects Log Page: May Support 00:31:12.119 Data Area 4 for Telemetry Log: Not Supported 00:31:12.119 Error Log Page Entries Supported: 128 00:31:12.119 Keep Alive: Supported 00:31:12.119 Keep Alive Granularity: 1000 ms 00:31:12.119 00:31:12.119 NVM Command Set Attributes 00:31:12.119 ========================== 00:31:12.119 Submission Queue Entry Size 00:31:12.119 Max: 64 00:31:12.119 Min: 64 00:31:12.119 Completion Queue Entry Size 00:31:12.119 Max: 16 00:31:12.119 Min: 16 00:31:12.119 Number of Namespaces: 1024 00:31:12.119 Compare Command: Not Supported 00:31:12.119 Write Uncorrectable Command: Not Supported 00:31:12.119 Dataset Management Command: Supported 00:31:12.119 Write Zeroes Command: Supported 00:31:12.119 Set Features Save Field: Not Supported 00:31:12.119 Reservations: Not Supported 00:31:12.119 Timestamp: Not Supported 00:31:12.119 Copy: Not Supported 00:31:12.119 Volatile Write Cache: Present 00:31:12.119 Atomic Write Unit (Normal): 1 00:31:12.119 Atomic Write Unit (PFail): 1 00:31:12.119 Atomic Compare & Write Unit: 1 00:31:12.119 Fused Compare & Write: Not Supported 00:31:12.119 Scatter-Gather List 00:31:12.119 SGL Command Set: Supported 00:31:12.119 SGL Keyed: Not Supported 00:31:12.119 SGL Bit Bucket Descriptor: Not Supported 00:31:12.119 SGL Metadata Pointer: Not Supported 00:31:12.119 Oversized SGL: Not Supported 00:31:12.119 SGL Metadata Address: Not Supported 00:31:12.119 SGL Offset: Supported 00:31:12.119 Transport SGL Data Block: Not Supported 00:31:12.119 Replay Protected Memory Block: Not Supported 00:31:12.119 00:31:12.119 Firmware Slot Information 00:31:12.119 ========================= 00:31:12.119 Active slot: 0 00:31:12.119 00:31:12.119 Asymmetric Namespace Access 00:31:12.119 =========================== 00:31:12.119 Change Count : 0 00:31:12.119 Number of ANA Group Descriptors : 1 00:31:12.119 ANA Group Descriptor : 0 00:31:12.119 ANA Group ID : 1 00:31:12.119 Number of NSID Values : 1 00:31:12.119 Change Count : 0 00:31:12.119 ANA State : 1 00:31:12.119 Namespace Identifier : 1 00:31:12.119 00:31:12.119 Commands Supported and Effects 00:31:12.119 ============================== 00:31:12.119 Admin Commands 00:31:12.119 -------------- 00:31:12.119 Get Log Page (02h): Supported 00:31:12.119 Identify (06h): Supported 00:31:12.119 Abort (08h): Supported 00:31:12.119 Set Features (09h): Supported 00:31:12.119 Get Features (0Ah): Supported 00:31:12.119 Asynchronous Event Request (0Ch): Supported 00:31:12.119 Keep Alive (18h): Supported 00:31:12.119 I/O Commands 00:31:12.119 ------------ 00:31:12.119 Flush (00h): Supported 00:31:12.119 Write (01h): Supported LBA-Change 00:31:12.119 Read (02h): Supported 00:31:12.119 Write Zeroes (08h): Supported LBA-Change 00:31:12.119 Dataset Management (09h): Supported 00:31:12.119 00:31:12.119 Error Log 00:31:12.119 ========= 00:31:12.119 Entry: 0 00:31:12.119 Error Count: 0x3 00:31:12.119 Submission Queue Id: 0x0 00:31:12.119 Command Id: 0x5 00:31:12.119 Phase Bit: 0 00:31:12.119 Status Code: 0x2 00:31:12.119 Status Code Type: 0x0 00:31:12.119 Do Not Retry: 1 00:31:12.119 Error Location: 0x28 00:31:12.119 LBA: 0x0 00:31:12.119 Namespace: 0x0 00:31:12.119 Vendor Log 
Page: 0x0 00:31:12.119 ----------- 00:31:12.119 Entry: 1 00:31:12.119 Error Count: 0x2 00:31:12.119 Submission Queue Id: 0x0 00:31:12.119 Command Id: 0x5 00:31:12.119 Phase Bit: 0 00:31:12.119 Status Code: 0x2 00:31:12.119 Status Code Type: 0x0 00:31:12.119 Do Not Retry: 1 00:31:12.119 Error Location: 0x28 00:31:12.119 LBA: 0x0 00:31:12.119 Namespace: 0x0 00:31:12.119 Vendor Log Page: 0x0 00:31:12.119 ----------- 00:31:12.119 Entry: 2 00:31:12.119 Error Count: 0x1 00:31:12.119 Submission Queue Id: 0x0 00:31:12.119 Command Id: 0x4 00:31:12.119 Phase Bit: 0 00:31:12.119 Status Code: 0x2 00:31:12.119 Status Code Type: 0x0 00:31:12.119 Do Not Retry: 1 00:31:12.119 Error Location: 0x28 00:31:12.119 LBA: 0x0 00:31:12.119 Namespace: 0x0 00:31:12.119 Vendor Log Page: 0x0 00:31:12.119 00:31:12.119 Number of Queues 00:31:12.119 ================ 00:31:12.119 Number of I/O Submission Queues: 128 00:31:12.119 Number of I/O Completion Queues: 128 00:31:12.119 00:31:12.119 ZNS Specific Controller Data 00:31:12.119 ============================ 00:31:12.119 Zone Append Size Limit: 0 00:31:12.119 00:31:12.119 00:31:12.119 Active Namespaces 00:31:12.119 ================= 00:31:12.119 get_feature(0x05) failed 00:31:12.119 Namespace ID:1 00:31:12.119 Command Set Identifier: NVM (00h) 00:31:12.119 Deallocate: Supported 00:31:12.119 Deallocated/Unwritten Error: Not Supported 00:31:12.119 Deallocated Read Value: Unknown 00:31:12.119 Deallocate in Write Zeroes: Not Supported 00:31:12.119 Deallocated Guard Field: 0xFFFF 00:31:12.119 Flush: Supported 00:31:12.119 Reservation: Not Supported 00:31:12.119 Namespace Sharing Capabilities: Multiple Controllers 00:31:12.119 Size (in LBAs): 1953525168 (931GiB) 00:31:12.119 Capacity (in LBAs): 1953525168 (931GiB) 00:31:12.119 Utilization (in LBAs): 1953525168 (931GiB) 00:31:12.119 UUID: 6cd02234-66b9-46b8-bb33-7052f5cd59f4 00:31:12.119 Thin Provisioning: Not Supported 00:31:12.119 Per-NS Atomic Units: Yes 00:31:12.119 Atomic Boundary Size (Normal): 0 00:31:12.119 Atomic Boundary Size (PFail): 0 00:31:12.119 Atomic Boundary Offset: 0 00:31:12.119 NGUID/EUI64 Never Reused: No 00:31:12.119 ANA group ID: 1 00:31:12.119 Namespace Write Protected: No 00:31:12.119 Number of LBA Formats: 1 00:31:12.119 Current LBA Format: LBA Format #00 00:31:12.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:12.119 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:12.119 rmmod nvme_tcp 00:31:12.119 rmmod nvme_fabrics 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:12.119 02:30:40 
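Annotation: the identify dump ends at the LBA-format table, after which nvmftestfini unloads the host-side modules (traced above) and clean_kernel_target unwinds the configfs tree (traced just below). A sketch of the shape, reconstructed from the traced guards — the retry loop's break condition, the echo-0 redirection target, and the _remove_spdk_ns body are assumptions:

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break          # retried; the rmmod lines above are its output
done
modprobe -v -r nvme-fabrics
set -e
ip netns delete cvl_0_0_ns_spdk 2> /dev/null  # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                      # drop the initiator-side test address
# clean_kernel_target then mirrors the setup strictly in reverse:
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"        # inferred redirection target
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet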
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.119 02:30:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:14.654 02:30:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:15.230 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:15.230 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:15.230 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:15.230 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:15.230 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:15.230 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:15.230 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:15.230 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:15.230 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:15.230 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:15.230 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:15.230 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:15.230 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:15.495 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:15.495 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:31:15.495 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:16.430 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:16.430 00:31:16.430 real 0m9.153s 00:31:16.430 user 0m1.844s 00:31:16.430 sys 0m3.197s 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:16.430 ************************************ 00:31:16.430 END TEST nvmf_identify_kernel_target 00:31:16.430 ************************************ 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.430 ************************************ 00:31:16.430 START TEST nvmf_auth_host 00:31:16.430 ************************************ 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:16.430 * Looking for test storage... 00:31:16.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.430 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
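The paths/export.sh trace above prepends the same /opt/go, /opt/protoc and /opt/golangci directories on every sourcing, so the exported PATH accumulates duplicate entries. Duplicates are harmless for command lookup but make the trace noisy; a minimal, purely illustrative dedup pass (not part of the SPDK scripts) could look like this:

dedup_path() {
    # Collapse repeated PATH entries, keeping first-seen order.
    local entry out=
    local IFS=:
    for entry in $PATH; do
        case ":$out:" in
            *":$entry:"*) ;;                  # already seen, skip
            *) out=${out:+$out:}$entry ;;
        esac
    done
    printf '%s\n' "$out"
}
PATH=$(dedup_path)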
00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:16.431 02:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:18.329 02:30:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:18.329 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:18.330 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
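The device scan above matches both E810 ports (vendor 0x8086, device 0x159b, driver ice) against the pci_bus_cache that gather_supported_nvmf_pci_devs builds. The same information can be read straight from sysfs; a rough sketch for illustration only (the test keeps its own cache rather than rescanning):

# Sketch: list PCI functions matching the Intel E810 id seen in the trace.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    driver=$(readlink "$dev/driver" 2>/dev/null)
    echo "Found ${dev##*/} ($vendor - $device), driver ${driver##*/}"
    ls "$dev/net" 2>/dev/null    # net devices under this function, e.g. cvl_0_0
done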
00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:18.330 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:18.330 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:18.330 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:18.330 02:30:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:18.330 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:18.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:18.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:31:18.588 00:31:18.588 --- 10.0.0.2 ping statistics --- 00:31:18.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.588 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:31:18.588 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:18.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:18.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:31:18.588 00:31:18.588 --- 10.0.0.1 ping statistics --- 00:31:18.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.589 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1166857 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1166857 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1166857 ']' 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
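nvmf_tcp_init above splits the two ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), port 4420 is opened with iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. On a machine without two physical ports, the same topology can be approximated with a veth pair; an illustrative sketch (the names mirror the trace, but this is not part of nvmf/common.sh):

# Sketch: target/initiator split via a veth pair instead of two E810 ports.
ip netns add tgt_ns
ip link add veth_host type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns
ip addr add 10.0.0.1/24 dev veth_host
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_host up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                    # host -> namespace, as in the trace
ip netns exec tgt_ns ping -c 1 10.0.0.1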
00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:18.589 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aec8be31c9f8ba05c9cf216706ff8537 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zL7 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aec8be31c9f8ba05c9cf216706ff8537 0 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aec8be31c9f8ba05c9cf216706ff8537 0 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aec8be31c9f8ba05c9cf216706ff8537 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zL7 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zL7 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zL7 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:18.847 02:30:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eede3c414507da7a5f2b84a27cf004fbea68b79417fdc799dedec200cbeda8f5 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DM4 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eede3c414507da7a5f2b84a27cf004fbea68b79417fdc799dedec200cbeda8f5 3 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eede3c414507da7a5f2b84a27cf004fbea68b79417fdc799dedec200cbeda8f5 3 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eede3c414507da7a5f2b84a27cf004fbea68b79417fdc799dedec200cbeda8f5 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DM4 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DM4 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DM4 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=373ca3107650caac193308be707e144f4823b7861a18b1f6 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eiC 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 373ca3107650caac193308be707e144f4823b7861a18b1f6 0 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 373ca3107650caac193308be707e144f4823b7861a18b1f6 0 
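Each gen_dhchap_key call above draws len/2 random bytes with xxd -p from /dev/urandom and hands the hex string to format_key, whose inline python wraps it into the DHHC-1 form used later in this test (e.g. DHHC-1:00:MzczY2Ez...:). Judging from that output, the wrapping is base64(secret || crc32(secret)) with the digest id as the middle field; a standalone sketch of that step, assumed equivalent to (not copied from) the script:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in 'gen_dhchap_key null 48'
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
# Trailing 4-byte little-endian CRC32 lets the receiver validate the key string.
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF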
00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=373ca3107650caac193308be707e144f4823b7861a18b1f6 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eiC 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eiC 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eiC 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:18.847 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=38aed73c161bd5b1362922093f08afc48bf966ff2229b783 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DmK 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 38aed73c161bd5b1362922093f08afc48bf966ff2229b783 2 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 38aed73c161bd5b1362922093f08afc48bf966ff2229b783 2 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=38aed73c161bd5b1362922093f08afc48bf966ff2229b783 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:18.848 02:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DmK 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DmK 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DmK 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.106 02:30:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c5d3ac23e7b5df492ac5bb62ce908c2 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UCw 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c5d3ac23e7b5df492ac5bb62ce908c2 1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c5d3ac23e7b5df492ac5bb62ce908c2 1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c5d3ac23e7b5df492ac5bb62ce908c2 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UCw 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UCw 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UCw 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=779638387a0611710e8f81a184762923 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.C2h 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 779638387a0611710e8f81a184762923 1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 779638387a0611710e8f81a184762923 1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=779638387a0611710e8f81a184762923 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.C2h 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.C2h 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.C2h 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=565bedfecb9c86f7fc71eb6cc724f507a3f0aaaa69ea2ba6 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zgb 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 565bedfecb9c86f7fc71eb6cc724f507a3f0aaaa69ea2ba6 2 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 565bedfecb9c86f7fc71eb6cc724f507a3f0aaaa69ea2ba6 2 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=565bedfecb9c86f7fc71eb6cc724f507a3f0aaaa69ea2ba6 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zgb 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zgb 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zgb 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:19.106 02:30:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40e706e672fc33501aa07439a8ffa0cc 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Blu 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40e706e672fc33501aa07439a8ffa0cc 0 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40e706e672fc33501aa07439a8ffa0cc 0 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40e706e672fc33501aa07439a8ffa0cc 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Blu 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Blu 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Blu 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f9eef7e56af6a7e393409c8783aeb802e21b74a5f611e732527f8005e39ab92 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Fn5 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f9eef7e56af6a7e393409c8783aeb802e21b74a5f611e732527f8005e39ab92 3 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f9eef7e56af6a7e393409c8783aeb802e21b74a5f611e732527f8005e39ab92 3 00:31:19.106 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f9eef7e56af6a7e393409c8783aeb802e21b74a5f611e732527f8005e39ab92 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Fn5 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Fn5 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Fn5 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1166857 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1166857 ']' 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:19.107 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zL7 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DM4 ]] 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DM4 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eiC 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DmK ]] 00:31:19.365 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.DmK 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UCw 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.C2h ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C2h 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zgb 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Blu ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Blu 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Fn5 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.623 02:30:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:31:19.623 02:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:31:20.995 Waiting for block devices as requested
00:31:20.995 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:31:20.995 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:31:20.995 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:31:20.995 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:31:21.251 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:31:21.251 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:31:21.251 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:31:21.251 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:31:21.508 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:31:21.508 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:31:21.508 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:31:21.765 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:31:21.765 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:31:21.765 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:31:21.765 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:31:22.022 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:31:22.022 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:31:22.588 No valid GPT data, bailing
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:31:22.588
00:31:22.588 Discovery Log Number of Records 2, Generation counter 2
00:31:22.588 =====Discovery Log Entry 0======
00:31:22.588 trtype: tcp
00:31:22.588 adrfam: ipv4
00:31:22.588 subtype: current discovery subsystem
00:31:22.588 treq: not specified, sq flow control disable supported
00:31:22.588 portid: 1
00:31:22.588 trsvcid: 4420
00:31:22.588 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:31:22.588 traddr: 10.0.0.1
00:31:22.588 eflags: none
00:31:22.588 sectype: none
00:31:22.588 =====Discovery Log Entry 1======
00:31:22.588 trtype: tcp
00:31:22.588 adrfam: ipv4
00:31:22.588 subtype: nvme subsystem
00:31:22.588 treq: not specified, sq flow control disable supported
00:31:22.588 portid: 1
00:31:22.588 trsvcid: 4420
00:31:22.588 subnqn: nqn.2024-02.io.spdk:cnode0
00:31:22.588 traddr: 10.0.0.1
00:31:22.588 eflags: none
00:31:22.588 sectype: none
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==:
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==:
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==:
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]]
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==:
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:31:22.588 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.589 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:22.847 nvme0n1
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr:
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=:
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr:
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=:
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:22.847 02:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.105 nvme0n1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==:
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==:
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==:
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==:
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.105 nvme0n1
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.105 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.363 nvme0n1
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk:
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:23.363 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.621 nvme0n1
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=:
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=:
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:23.621 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.622 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.881 nvme0n1
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:23.881 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr:
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=:
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr:
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]]
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=:
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:23.882 02:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.140 nvme0n1
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==:
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==:
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==:
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==:
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.140 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.397 nvme0n1
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st:
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]]
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st:
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:24.397 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.398 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.655 nvme0n1
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==:
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk:
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==:
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk:
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:24.655 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.656 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.913 nvme0n1
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=:
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=:
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:31:24.913 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:24.914 02:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:24.914 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:25.173 nvme0n1
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr:
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=:
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr:
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]]
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=:
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- #
ip_candidates=() 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.173 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.431 nvme0n1 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:25.431 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:25.432 02:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:25.432 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:25.689 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:25.689 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.689 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.946 nvme0n1 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
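The per-(dhgroup, keyid) blocks in this trace all repeat one structural unit: host/auth.sh@103 runs nvmet_auth_set_key to program the kernel target for the chosen digest, DH group, and key index, then host/auth.sh@104 runs connect_authenticate to reconfigure the SPDK host, attach with DH-HMAC-CHAP, verify the controller appeared, and detach. A minimal sketch of that driver loop, reconstructed from the traced auth.sh line tags (the keys/ckeys arrays and the rpc_cmd wrapper are inferred from the trace, not shown verbatim in this excerpt):

    # Hedged reconstruction of the loop producing the repeated blocks above.
    # Assumes digests/dhgroups/keys/ckeys were populated earlier in auth.sh.
    for dhgroup in "${dhgroups[@]}"; do              # ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do               # key indices 0..4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side (auth.sh@103)
            connect_authenticate sha256 "$dhgroup" "$keyid"  # host side  (auth.sh@104)
        done
    done

connect_authenticate itself (auth.sh@55-65 in the trace) reduces to three RPCs against the running SPDK application; note the array idiom at auth.sh@58 that adds --dhchap-ctrlr-key only when a controller key exists for the key index (keyid 4 has an empty ckey, so its attach passes only --dhchap-key):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # auth.sh@58
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"                                  # auth.sh@60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                       # auth.sh@61
        # Authentication succeeded iff the controller surfaces as nvme0 (auth.sh@64)
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0                         # auth.sh@65
    }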
00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.947 02:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.205 nvme0n1 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.205 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.463 nvme0n1 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.463 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.721 02:30:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.721 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.980 nvme0n1 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.980 02:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.980 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 nvme0n1 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 
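Two helpers recur verbatim around every attach in this trace. The nvmf/common.sh@741-755 lines are get_main_ns_ip resolving which address to dial: the helper maps the transport under test to the name of an environment variable and then dereferences it, which is why this TCP run always ends in "echo 10.0.0.1". A sketch consistent with the traced checks (the indirect expansion is an inference; the log only shows the resolved value):

    # Sketch of get_main_ns_ip per the nvmf/common.sh@741-755 trace lines.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP    # common.sh@744
            ["tcp"]=NVMF_INITIATOR_IP        # common.sh@745 (this run)
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                  # common.sh@747: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # common.sh@748
        [[ -z ${!ip} ]] && return 1                           # common.sh@750: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                         # common.sh@755
    }

The other half, nvmet_auth_set_key (auth.sh@42-51), appears in the log only as bare echo commands because xtrace output omits redirections. A heavily hedged sketch, assuming the values feed the Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key; the path below is hypothetical, not shown in this log):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # hypothetical path
        echo "hmac(${digest})" > "$host/dhchap_hash"              # auth.sh@48
        echo "$dhgroup"        > "$host/dhchap_dhgroup"           # auth.sh@49
        echo "$key"            > "$host/dhchap_key"               # auth.sh@50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51: ckey is optional
    }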
00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.546 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:27.547 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.547 02:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.110 nvme0n1 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.110 02:30:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.110 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.111 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.675 nvme0n1 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.675 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.676 02:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.243 nvme0n1 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.243 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.811 nvme0n1 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.811 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.812 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.812 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.812 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.812 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.812 02:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:30.746 nvme0n1 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.746 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.005 02:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.940 nvme0n1 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:31.940 
02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.940 02:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.873 nvme0n1 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:32.873 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.874 
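
Each connect_authenticate round in this stretch is the same four-step RPC exchange against the SPDK initiator: pin the allowed digest and DH group, attach with the per-keyid secrets, check that the controller actually came up, and detach before the next combination. Condensed from the host/auth.sh@55-65 trace lines, assuming rpc_cmd forwards to scripts/rpc.py as in the rest of the SPDK test suite:

    # One authentication round as driven by connect_authenticate above.
    digest=sha256 dhgroup=ffdhe8192 keyid=3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                        # DH-HMAC-CHAP handshake succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next keyid
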
02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.874 02:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 nvme0n1 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.809 02:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.742 nvme0n1 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.742 02:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.000 nvme0n1 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
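
At host/auth.sh@100-102 the sweep rolls over from sha256/ffdhe8192 to sha384/ffdhe2048: the outer loop walks every digest, the middle loop every DH group, and the inner loop every key id, so each secret is exercised against each negotiable combination. The driver loop implied by those trace lines; the array contents beyond what appears in this log (sha512, the remaining ffdhe groups) are an assumption:

    # Sweep inferred from the @100/@101/@102 loop headers in the xtrace.
    # nvmet_auth_set_key and connect_authenticate are the script's own
    # helpers traced above; the array literals here are assumptions.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do          # keyids 0..4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
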
host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.000 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.264 nvme0n1 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:35.264 02:31:03 
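
The secrets being set up here follow the NVMe in-band authentication representation: a 'DHHC-1:' tag, a two-digit hash identifier (00 means the secret is used as-is; 01, 02, 03 correspond to SHA-256/384/512 and to 32/48/64-byte keys), then base64 of the key material plus a 4-byte CRC32, and a closing colon. A quick sanity check against the :01: key just above; the expected byte counts are recalled from the spec, so treat them as an assumption:

    # Decode the payload of a DHHC-1 secret and check its length:
    # a :01: secret should carry 32 key bytes + 4 CRC bytes = 36.
    key='DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:'
    payload=$(cut -d: -f3 <<<"$key")
    printf '%s' "$payload" | base64 -d | wc -c   # prints 36
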
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.264 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.554 nvme0n1 00:31:35.554 02:31:03 
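
On the target side, the echoes at host/auth.sh@48-51 ('hmac(sha384)', the dhgroup name, then the key and, when present, the ckey) are presumably redirected into the kernel nvmet configfs entry for this host, which is how the Linux target learns what the initiator must prove. A sketch under that assumption; the attribute names are the ones the nvmet driver exposes, and the hostnqn is taken from the attach calls in this trace:

    # Likely effect of nvmet_auth_set_key: program digest, DH group and
    # secrets into the nvmet host entry (paths assumed, not shown in the log).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"       # host/auth.sh@48
    echo ffdhe2048      > "$host/dhchap_dhgroup"    # host/auth.sh@49
    echo "$key"         > "$host/dhchap_key"        # host/auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # @51, bidirectional auth only
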
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.554 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.555 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.814 nvme0n1 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.814 nvme0n1 00:31:35.814 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.815 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.815 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.815 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.815 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.815 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.072 02:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.072 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.072 nvme0n1 00:31:36.073 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.073 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.073 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.073 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.073 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.073 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.330 
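
One xtrace artifact worth decoding before the ffdhe3072 rounds continue: the recurring [[ nvme0 == \n\v\m\e\0 ]] check. Inside [[ ]] the right-hand side of == is a glob pattern, and the script escapes every character to force a literal comparison; xtrace prints those backslashes verbatim. The difference in plain bash:

    # Unquoted RHS of == inside [[ ]] is a glob; escaping or quoting each
    # character makes the comparison literal, which is what the trace shows.
    name=nvme0
    [[ $name == nvme* ]]      && echo "pattern match"   # matches nvme0, nvme1, ...
    [[ $name == \n\v\m\e\0 ]] && echo "literal match"   # exactly the string nvme0
    [[ $name == "nvme0" ]]    && echo "quoted literal"  # equivalent literal form
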
02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.330 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.331 02:31:04 
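[annotation] nvmet_auth_set_key (auth.sh@42-51 above) mirrors each key onto the kernel nvmet target before the host tries to connect; the four echoes in the trace are its observable effect. The trace never shows where those echoes land, so the configfs paths below are an assumption based on the standard Linux nvmet host attributes, not on this log:

    # plausible shape of nvmet_auth_set_key; $hostdir is assumed, the values are from the trace
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
        echo "hmac($digest)"  > "$hostdir/dhchap_hash"     # 'hmac(sha384)' in the trace
        echo "$dhgroup"       > "$hostdir/dhchap_dhgroup"  # ffdhe3072 here
        echo "${keys[keyid]}" > "$hostdir/dhchap_key"      # DHHC-1:...: host secret
        # keyid 4 has no controller key, hence the [[ -z '' ]] branch seen later in the trace
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$hostdir/dhchap_ctrl_key"
    }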
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.331 nvme0n1 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.331 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:36.588 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.589 nvme0n1 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.589 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.846 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:36.846 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.847 nvme0n1 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.847 02:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.847 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.105 
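[annotation] All the secrets echoed in this run share the NVMe DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where <t> records how the secret was transformed: 00 means untransformed (the 00 keys here are 32 and 48 bytes long), while 01/02/03 mean transformed with SHA-256/-384/-512, whose digest sizes (32/48/64 bytes) match the payloads above; the base64 field carries the secret plus a 4-byte CRC-32. A quick, purely illustrative length check on one key copied from this log:

    # 01 => SHA-256 => 32-byte secret + 4-byte CRC-32 = 36 decoded bytes
    key='DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:'
    b64=${key#DHHC-1:01:}; b64=${b64%:}
    echo -n "$b64" | base64 -d | wc -c    # prints 36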
02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.105 nvme0n1 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.105 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.363 
02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.363 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.622 nvme0n1 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:37.622 02:31:05 
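[annotation] get_main_ns_ip (nvmf/common.sh@741-755, traced in full just above) always resolves to 10.0.0.1 in this run because the transport is tcp. A reconstruction from the expanded trace; the name $TEST_TRANSPORT and the indirect expansion are assumptions, since xtrace only shows the expanded values ("tcp", NVMF_INITIATOR_IP, 10.0.0.1):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # name of the variable holding the IP
        [[ -z ${!ip} ]] && return 1                           # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                         # 10.0.0.1 on this rig
    }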
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.622 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.881 nvme0n1 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.881 02:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.881 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.140 nvme0n1 00:31:38.140 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.140 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.140 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.140 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.140 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.398 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.656 nvme0n1 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.656 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.657 02:31:06 
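[annotation] The @101/@102 markers that keep recurring are the outer loops of this phase: every dhgroup is crossed with every keyid, and ffdhe4096 is now being swept exactly as ffdhe3072 was. The driver, as implied by the loop headers and calls visible at auth.sh@101-104 (the keys/ckeys arrays and the dhgroups list are populated earlier in the script, outside this excerpt):

    # auth.sh main sweep, as suggested by the traced loop headers
    for dhgroup in "${dhgroups[@]}"; do        # this excerpt shows ffdhe3072 through ffdhe6144
        for keyid in "${!keys[@]}"; do         # 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side attach/verify/detach
        done
    done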
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.657 02:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.915 nvme0n1 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.915 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.173 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.739 nvme0n1 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:39.739 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.740 02:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.305 nvme0n1 00:31:40.305 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.305 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.305 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.305 02:31:08 
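[annotation] The odd-looking assignment ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) at auth.sh@58 is what lets keyid 4 (which has no controller key) reuse the same attach call as the others: the :+ expansion yields the option-and-argument pair only when ckeys[keyid] is non-empty, and an empty array otherwise. The idiom in isolation:

    # expands to nothing when the controller key is absent, so splicing "${ckey[@]}"
    # into the attach call adds --dhchap-ctrlr-key only for keys that have a peer key
    declare -A ckeys=([0]=present [4]=)
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[@]:-<none>}"
    done
    # keyid=0 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 extra args: <none>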
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.305 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.306 02:31:08 
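
The echo lines traced at host/auth.sh@48-51 above emit the digest, DH group, host key, and controller key for one keyid; in this test they provision the kernel nvmet soft target's per-host DH-HMAC-CHAP settings. A hedged sketch of that provisioning, with the configfs path and attribute names assumed for illustration (the log shows only the echoed values):

    # Assumed Linux nvmet configfs layout; not read from this log.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"     # digest, as echoed at @48
    echo 'ffdhe6144'    > "$host_dir/dhchap_dhgroup"  # DH group, as echoed at @49
    echo 'DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:' \
        > "$host_dir/dhchap_key"                      # host key, as echoed at @50
    echo 'DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st:' \
        > "$host_dir/dhchap_ctrl_key"                 # controller key, @51
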
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.306 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.872 nvme0n1 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.872 02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.872 
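
The host side of connect_authenticate, traced at host/auth.sh@60-61 above, reduces to two RPCs. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; the flags below are copied from the trace itself (key3/ckey3 name keyring entries registered earlier in the run, outside this excerpt):

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
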
02:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.438 nvme0n1 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.438 02:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.004 nvme0n1 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.004 02:31:10 
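
The for-lines traced at host/auth.sh@100-104 (the "for dhgroup"/"for keyid" steps just above, and the "for digest" step at the later sha512 switch) reconstruct to a triple loop over every digest x dhgroup x keyid combination; this excerpt covers sha384 over ffdhe6144 and ffdhe8192, then sha512 over ffdhe2048, with keyids 0-4:

    for digest in "${digests[@]}"; do                # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
            for keyid in "${!keys[@]}"; do           # host/auth.sh@102: 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done
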
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:42.004 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.005 02:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.378 nvme0n1 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.378 02:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.311 nvme0n1 00:31:44.311 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.311 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.311 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.311 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.312 
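
get_main_ns_ip, being traced at nvmf/common.sh@741-755 at this point, resolves the address used by the attach call through a transport-to-variable map plus bash indirect expansion. A reconstruction from the trace; the failure branches are assumptions, since this log only ever shows the passing path:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # values are variable *names*
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                   # @747, passes: tcp
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # @747, passes
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # @748
        [[ -z ${!ip} ]] && return 1                            # @750, indirect
        echo "${!ip}"                                          # @755: 10.0.0.1
    }
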
02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.312 02:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.245 nvme0n1 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.245 02:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.177 nvme0n1 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.177 02:31:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.177 02:31:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.177 02:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.109 nvme0n1 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.109 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.110 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.110 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.110 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.110 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.110 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.110 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:47.367 nvme0n1 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.367 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.625 nvme0n1 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:47.625 
02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.625 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.883 nvme0n1 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.883 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.884 
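On the initiator side, each connect_authenticate <digest> <dhgroup> <keyid> pass (host/auth.sh@55-61) first restricts SPDK's bdev_nvme layer to the single digest/dhgroup pair under test, then attaches with the matching keys, so a successful attach proves that exact combination negotiated. A condensed sketch of the RPC sequence for the keyid=2 iteration above, with scripts/rpc.py standing in for the rpc_cmd wrapper in the trace and key2/ckey2 being the key names presumably registered earlier in the run (not shown in this excerpt):

#!/usr/bin/env bash
set -euo pipefail
rpc=scripts/rpc.py   # stand-in for the rpc_cmd wrapper seen in the trace

# Allow only the digest/dhgroup combination under test (auth.sh@60).
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Attach with DH-HMAC-CHAP: --dhchap-key authenticates the host toward the
# target, --dhchap-ctrlr-key additionally requests bidirectional
# authentication of the controller (auth.sh@61).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# On success the RPC prints the created bdev name ("nvme0n1" in the trace).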
02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.884 02:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.141 nvme0n1 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.141 nvme0n1 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.141 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.399 nvme0n1 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.399 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.658 
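The get_main_ns_ip fragments repeated before every attach (nvmf/common.sh@741-755) resolve the address the initiator dials: an associative array maps the transport to the name of the environment variable holding the address, and bash indirect expansion (${!ip}) dereferences it, producing the "echo 10.0.0.1" seen above. Reassembled from the trace markers (variable values are the ones from this run):

# Reconstruction of get_main_ns_ip from the nvmf/common.sh@741-755 markers;
# TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 in this run.
get_main_ns_ip() {
    local ip                                       # @741
    local -A ip_candidates=()                      # @742
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # @744
    ip_candidates["tcp"]=NVMF_INITIATOR_IP         # @745

    # Bail out if the transport is unset or has no mapping (@747).
    [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}           # @748: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip:-} ]] && return 1                  # @750: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                  # @755: echo 10.0.0.1
}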
02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:48.658 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.659 02:31:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.659 nvme0n1 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.659 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:48.916 02:31:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.916 02:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.916 nvme0n1 00:31:48.916 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.916 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.916 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.916 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.916 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.916 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.174 02:31:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.174 nvme0n1 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.174 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.432 
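One bash detail controls the keyid 4 iteration that follows: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) (host/auth.sh@58) builds a zero- or one-pair argument array, because ${var:+word} expands to word only when var is set and non-empty. Keyid 4 has no controller key (note "ckey=" in the next entry), so the attach below passes only --dhchap-key key4. A standalone illustration of the idiom, with a hypothetical helper name:

#!/usr/bin/env bash
# ${var:+word} yields word only if var is set and non-empty; putting the
# expansion in an array makes an optional CLI flag vanish cleanly, which is
# what host/auth.sh@58 does with --dhchap-ctrlr-key.
attach_args() {
    local keyid=$1 ctrlr_key=${2:-}
    local extra=(${ctrlr_key:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "--dhchap-key key${keyid} ${extra[*]}"
}
attach_args 2 "some-nonempty-secret"   # -> --dhchap-key key2 --dhchap-ctrlr-key ckey2
attach_args 4 ""                       # -> --dhchap-key key4 (flag omitted)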
02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
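After every attach, the controller is verified and torn down before the next key is tried: the bare "nvme0n1" entry is the bdev name the attach RPC returned, bdev_nvme_get_controllers piped through jq -r '.[].name' must yield nvme0 (xtrace backslash-escapes the right-hand side of [[ nvme0 == \n\v\m\e\0 ]] to show it is compared literally rather than as a glob), and bdev_nvme_detach_controller nvme0 resets the state. In isolation, again with scripts/rpc.py standing in for rpc_cmd:

#!/usr/bin/env bash
set -euo pipefail
rpc=scripts/rpc.py

# Verify that exactly the expected controller exists (host/auth.sh@64).
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]

# Detach so the next digest/dhgroup/keyid combination starts from a clean
# slate (host/auth.sh@65).
$rpc bdev_nvme_detach_controller nvme0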
00:31:49.432 nvme0n1 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.432 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:49.690 02:31:17 
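At this point the trace rolls over from ffdhe3072 to ffdhe4096 (host/auth.sh@101 -- # for dhgroup ...). The whole section is one nested sweep: for each DH group, every keyid 0-4 is provisioned on the target, connected, verified, and detached; the digest (sha512 throughout this excerpt) comes from an enclosing loop not visible here. The driving loop, as read off the @101-@104 markers:

# Loop skeleton reconstructed from the host/auth.sh@101-104 trace markers;
# keys/ckeys hold the five DHHC-1 secrets from the log (ckeys[4] is empty).
for dhgroup in "${dhgroups[@]}"; do            # @101: ... ffdhe3072 ffdhe4096 ...
    for keyid in "${!keys[@]}"; do             # @102: 0 1 2 3 4
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103: target side
        connect_authenticate sha512 "$dhgroup" "$keyid"  # @104: attach, verify, detach
    done
done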
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.690 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.691 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.949 nvme0n1 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.949 02:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.949 02:31:18 
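A note on the secrets themselves: every key in this run uses the DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:. As generated by nvme-cli's gen-dhchap-key, the <t> tag encodes how the secret was produced and sized (00 = untransformed, 01/02/03 = 32/48/64 bytes, matching SHA-256/384/512), and the base64 payload is the secret followed by a 4-byte CRC-32 check; these field semantics are stated from familiarity with that tooling, not from this log. The sizes can be checked offline; for instance, the 01-tagged key from the trace decodes to 36 bytes (32 + 4):

# Hypothetical sanity check, not part of the test run (coreutils base64):
key='DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:'
b64=${key#DHHC-1:01:}                    # strip the prefix
b64=${b64%:}                             # strip the trailing colon
printf '%s' "$b64" | base64 -d | wc -c   # prints 36 = 32-byte secret + CRC-32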
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.949 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.950 02:31:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.950 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.208 nvme0n1 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.208 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.467 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.726 nvme0n1 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.726 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.727 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.986 nvme0n1 00:31:50.986 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.986 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.986 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.986 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.986 02:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.986 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.244 nvme0n1 00:31:51.244 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
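
Keyid 4 is the one entry with no controller key: @46 assigns an empty ckey, @51's [[ -z '' ]] skips the target-side write, and @58's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) leaves the host-side argument array empty, so --dhchap-ctrlr-key never reaches the attach command at all. A standalone demonstration of that :+ expansion (the key values here are placeholders, not the ones in this run):

    # ":+" expands to the alternate words only when the variable is set and
    # non-empty; left unquoted, those words split into separate array elements.
    ckeys=([3]="DHHC-1:00:placeholder:" [4]="")
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # keyid=3 -> 2 extra args: --dhchap-ctrlr-key ckey3
    # keyid=4 -> 0 extra args:
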
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.245 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.503 02:31:19 
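
The @101-@104 markers are the harness's outer loops: for every dhgroup, and for every keyid within it, program the target, constrain the host, connect, verify, disconnect, with connect_authenticate (@55-@65) covering everything after the target-side key write. Reassembled from the trace into one place (a sketch; array contents and helper internals are only what the trace itself shows):

    # One full authentication cycle per (dhgroup, keyid) pair, as traced.
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192 here
        for keyid in "${!keys[@]}"; do         # keyids 0..4 in this run
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"             # target side
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"  # host side
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            # The connect only counts if the controller actually materialized.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
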
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.503 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.070 nvme0n1 00:31:52.070 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.070 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.070 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.070 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.070 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.070 02:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:52.070 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.071 02:31:20 
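
The get_main_ns_ip body traced at nvmf/common.sh@741-@755 is a small indirection table: it maps the transport to the name of an environment variable, then dereferences that name. Reconstructed from the traced branches ([[ -z tcp ]], [[ -z NVMF_INITIATOR_IP ]], ip=NVMF_INITIATOR_IP, [[ -z 10.0.0.1 ]], echo 10.0.0.1); the transport variable's name is an assumption, since the trace only shows its expanded value:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP    # @744
            [tcp]=NVMF_INITIATOR_IP        # @745
        )
        # @747: bail out if the transport is unset or has no mapped variable.
        [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip holds a variable *name*
        [[ -n ${!ip} ]] || return 1            # @750: indirect expansion -> 10.0.0.1
        echo "${!ip}"                          # @755
    }
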
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.071 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.639 nvme0n1 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.639 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.640 02:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.204 nvme0n1 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.204 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.205 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.768 nvme0n1 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.768 02:31:21 
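
Each cycle's success check is the pair at @64: pull the controller list over RPC, extract names with jq, and compare against nvme0. The backslash-riddled [[ nvme0 == \n\v\m\e\0 ]] in the trace is not corruption: the right-hand side of == inside [[ ]] is a glob pattern, the script quotes it to force a literal match, and set -x re-prints the quoted string with every character escaped. For example:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # literal match; xtrace renders it as \n\v\m\e\0
    [[ $name == nvme? ]]     # unquoted RHS is a pattern and would also match nvme1
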
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.768 02:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.331 nvme0n1 00:31:54.331 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.331 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.331 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.332 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.332 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.332 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.332 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.332 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.332 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.332 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVjOGJlMzFjOWY4YmEwNWM5Y2YyMTY3MDZmZjg1Mzfvd7Yr: 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: ]] 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVkZTNjNDE0NTA3ZGE3YTVmMmI4NGEyN2NmMDA0ZmJlYTY4Yjc5NDE3ZmRjNzk5ZGVkZWMyMDBjYmVkYThmNS2oQ6M=: 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.588 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
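
The @561 xtrace_disable / @589 [[ 0 == 0 ]] pair bracketing every RPC in this trace is the harness muting set -x while the python RPC client runs, then asserting its exit status. A hypothetical reconstruction of that wrapper's shape (the real code lives in common/autotest_common.sh and scripts/rpc.py; anything beyond the two traced lines is a guess):

    rpc_cmd() {
        xtrace_disable                        # @561: keep rpc.py's noise out of the log
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                        # traced as [[ 0 == 0 ]] on success (@589)
    }
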
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.589 02:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.520 nvme0n1 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:55.520 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.521 02:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.451 nvme0n1 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.451 02:31:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.451 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt: 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: ]] 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzc5NjM4Mzg3YTA2MTE3MTBlOGY4MWExODQ3NjI5MjOKY0st: 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.452 02:31:24 
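
All the secrets in this run share the nvme-cli DHHC-1 representation, DHHC-1:<id>:<base64>:, where the id field distinguishes a raw secret (00) from secrets sized to a SHA-256/384/512 digest (01/02/03), and by convention the base64 blob carries the secret followed by a 4-byte CRC-32. The log itself never states that convention, but the lengths line up; for the keyid-2 secret above:

    key='DHHC-1:01:N2M1ZDNhYzIzZTdiNWRmNDkyYWM1YmI2MmNlOTA4YzK5OnHt:'
    blob=${key#DHHC-1:*:}   # strip the "DHHC-1:01:" prefix
    blob=${blob%:}          # and the trailing colon
    echo -n "$blob" | base64 -d | wc -c   # 36 bytes = 32-byte secret + 4-byte check
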
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.452 02:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.384 nvme0n1 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTY1YmVkZmVjYjljODZmN2ZjNzFlYjZjYzcyNGY1MDdhM2YwYWFhYTY5ZWEyYmE2H/c0Qw==: 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: ]] 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBlNzA2ZTY3MmZjMzM1MDFhYTA3NDM5YThmZmEwY2PgoPJk: 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:57.384 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:57.385 02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.385 
02:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 nvme0n1 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWY5ZWVmN2U1NmFmNmE3ZTM5MzQwOWM4NzgzYWViODAyZTIxYjc0YTVmNjExZTczMjUyN2Y4MDA1ZTM5YWI5MhGzddo=: 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:58.755 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.756 02:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.320 nvme0n1 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.320 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzczY2EzMTA3NjUwY2FhYzE5MzMwOGJlNzA3ZTE0NGY0ODIzYjc4NjFhMThiMWY2JBbfxA==: 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzhhZWQ3M2MxNjFiZDViMTM2MjkyMjA5M2YwOGFmYzQ4YmY5NjZmZjIyMjliNzgzKa5fEg==: 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.579 request: 00:31:59.579 { 00:31:59.579 "name": "nvme0", 00:31:59.579 "trtype": "tcp", 00:31:59.579 "traddr": "10.0.0.1", 00:31:59.579 "adrfam": "ipv4", 00:31:59.579 "trsvcid": "4420", 00:31:59.579 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:59.579 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:59.579 "prchk_reftag": false, 00:31:59.579 "prchk_guard": false, 00:31:59.579 "hdgst": false, 00:31:59.579 "ddgst": false, 00:31:59.579 "method": "bdev_nvme_attach_controller", 00:31:59.579 "req_id": 1 00:31:59.579 } 00:31:59.579 Got JSON-RPC error response 00:31:59.579 response: 00:31:59.579 { 00:31:59.579 "code": -5, 00:31:59.579 "message": "Input/output error" 00:31:59.579 } 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.579 02:31:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.579 request: 00:31:59.579 { 00:31:59.579 "name": "nvme0", 00:31:59.579 "trtype": "tcp", 00:31:59.579 "traddr": "10.0.0.1", 00:31:59.579 "adrfam": "ipv4", 00:31:59.579 "trsvcid": "4420", 00:31:59.579 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:59.579 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:59.579 "prchk_reftag": false, 00:31:59.579 "prchk_guard": false, 00:31:59.579 "hdgst": false, 00:31:59.579 "ddgst": false, 00:31:59.579 "dhchap_key": "key2", 00:31:59.579 "method": "bdev_nvme_attach_controller", 00:31:59.579 "req_id": 1 00:31:59.579 } 00:31:59.579 Got JSON-RPC error response 00:31:59.579 response: 00:31:59.579 { 00:31:59.579 "code": -5, 00:31:59.579 "message": "Input/output error" 00:31:59.579 } 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:59.579 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:59.580 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.580 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:59.580 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.580 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.580 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.839 request: 00:31:59.839 { 00:31:59.839 "name": "nvme0", 00:31:59.839 "trtype": "tcp", 00:31:59.839 "traddr": "10.0.0.1", 00:31:59.839 "adrfam": "ipv4", 00:31:59.839 "trsvcid": "4420", 00:31:59.839 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:59.839 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:59.839 "prchk_reftag": false, 00:31:59.839 "prchk_guard": false, 00:31:59.839 "hdgst": false, 00:31:59.839 "ddgst": false, 00:31:59.839 "dhchap_key": "key1", 00:31:59.839 "dhchap_ctrlr_key": "ckey2", 00:31:59.839 "method": "bdev_nvme_attach_controller", 00:31:59.839 "req_id": 1 00:31:59.839 } 00:31:59.839 Got JSON-RPC error response 00:31:59.839 response: 00:31:59.839 { 00:31:59.839 "code": -5, 00:31:59.839 "message": "Input/output error" 00:31:59.839 } 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:59.839 rmmod nvme_tcp 00:31:59.839 rmmod nvme_fabrics 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1166857 ']' 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1166857 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1166857 ']' 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1166857 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1166857 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:59.839 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1166857' 00:31:59.839 killing process with pid 1166857 00:31:59.840 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1166857 00:31:59.840 02:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1166857 00:32:00.099 02:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:00.099 02:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:00.099 02:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:00.099 02:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.099 02:31:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:00.099 02:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.099 02:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.099 02:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:02.000 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:02.259 02:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:03.193 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:03.193 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:03.193 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:03.451 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:03.451 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:03.451 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:03.451 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:03.451 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:03.451 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:04.386 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:04.386 02:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zL7 /tmp/spdk.key-null.eiC /tmp/spdk.key-sha256.UCw /tmp/spdk.key-sha384.zgb /tmp/spdk.key-sha512.Fn5 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:04.386 02:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:05.758 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:05.758 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:05.758 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:05.758 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:05.758 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:05.758 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:05.758 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:05.758 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:05.758 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:05.758 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:05.758 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:05.758 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:05.758 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:05.758 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:05.758 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:05.758 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:05.758 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:05.758 00:32:05.758 real 0m49.300s 00:32:05.758 user 0m47.084s 00:32:05.758 sys 0m5.831s 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.758 ************************************ 00:32:05.758 END TEST nvmf_auth_host 00:32:05.758 ************************************ 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.758 ************************************ 00:32:05.758 START TEST nvmf_digest 00:32:05.758 ************************************ 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:05.758 * Looking for test storage... 
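For reference, the nvmf_auth_host run that ends above exercises DH-HMAC-CHAP in a fixed pattern per digest/DH-group pair (sha512 with ffdhe8192 in the lines above): set the allowed digest and group, attach with the host key (plus a controller key when bidirectional authentication is tested), confirm the controller, detach, and move to the next key ID. The negative passes then retry the attach with missing or mismatched keys and expect the JSON-RPC failure dumped above ("code": -5, "Input/output error"). Cleanup dismantles the kernel nvmet target in configfs order (allowed_hosts link, host entry, port link, namespace, port, subsystem) before unloading nvmet_tcp/nvmet. A minimal sketch of one positive iteration, reconstructed from the rpc_cmd traces (rpc_cmd in the trace wraps scripts/rpc.py; key3/ckey3 refer to DHHC-1 keys registered earlier in the run, outside this excerpt):

    # allow exactly one digest/DH-group pair for this iteration
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # authenticate on connect; --dhchap-ctrlr-key enables bidirectional auth
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # verify the controller exists, then detach before the next combination
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0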
00:32:05.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.758 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:05.759 
02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:05.759 02:31:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:07.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:07.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.687 
02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:07.687 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:07.687 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:07.687 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.688 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.946 02:31:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:07.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:32:07.946 00:32:07.946 --- 10.0.0.2 ping statistics --- 00:32:07.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.946 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:07.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:32:07.946 00:32:07.946 --- 10.0.0.1 ping statistics --- 00:32:07.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.946 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.946 ************************************ 00:32:07.946 START TEST nvmf_digest_clean 00:32:07.946 ************************************ 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1178281 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1178281 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1178281 ']' 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.946 02:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:07.946 [2024-07-27 02:31:36.039215] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:07.946 [2024-07-27 02:31:36.039304] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.946 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.946 [2024-07-27 02:31:36.076324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:07.946 [2024-07-27 02:31:36.102591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.204 [2024-07-27 02:31:36.187330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.204 [2024-07-27 02:31:36.187397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.204 [2024-07-27 02:31:36.187410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.204 [2024-07-27 02:31:36.187421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:08.204 [2024-07-27 02:31:36.187431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.204 [2024-07-27 02:31:36.187456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.204 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:08.463 null0 00:32:08.463 [2024-07-27 02:31:36.384517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.463 [2024-07-27 02:31:36.408758] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1178307 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1178307 /var/tmp/bperf.sock 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1178307 ']' 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:08.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:08.463 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:08.463 [2024-07-27 02:31:36.453492] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:08.463 [2024-07-27 02:31:36.453569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178307 ] 00:32:08.463 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.463 [2024-07-27 02:31:36.485927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:08.463 [2024-07-27 02:31:36.516156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.463 [2024-07-27 02:31:36.607299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.720 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.720 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:08.720 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:08.720 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:08.721 02:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:08.979 02:31:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:08.979 02:31:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:09.545 nvme0n1 00:32:09.545 02:31:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:09.545 02:31:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:09.545 Running I/O for 2 seconds... 
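The digest test traced above starts bdevperf idle on a private RPC socket and drives it remotely; the controller is attached with --ddgst, so every read in the timed run below is covered by an NVMe/TCP data digest (crc32c), which is what is being measured. A condensed replay of the commands, assembled from the traces (backgrounding with & stands in for the harness's own process management):

    # bdevperf stays alive (-z) and defers subsystem init (--wait-for-rpc)
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # finish initialization, then attach with data digest enabled
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # kick off the timed workload ("Running I/O for 2 seconds..." above)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests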
00:32:12.074 00:32:12.074 Latency(us) 00:32:12.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.074 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:12.074 nvme0n1 : 2.00 18864.50 73.69 0.00 0.00 6775.28 3228.25 20097.71 00:32:12.074 =================================================================================================================== 00:32:12.074 Total : 18864.50 73.69 0.00 0.00 6775.28 3228.25 20097.71 00:32:12.074 0 00:32:12.074 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:12.074 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:12.074 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:12.075 | select(.opcode=="crc32c") 00:32:12.075 | "\(.module_name) \(.executed)"' 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1178307 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1178307 ']' 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1178307 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1178307 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1178307' 00:32:12.075 killing process with pid 1178307 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1178307 00:32:12.075 Received shutdown signal, test time was about 2.000000 seconds 00:32:12.075 00:32:12.075 Latency(us) 00:32:12.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.075 =================================================================================================================== 00:32:12.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.075 02:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1178307 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1178712 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1178712 /var/tmp/bperf.sock 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1178712 ']' 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:12.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:12.075 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:12.075 [2024-07-27 02:31:40.221512] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:12.075 [2024-07-27 02:31:40.221591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178712 ] 00:32:12.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:12.075 Zero copy mechanism will not be used. 00:32:12.333 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.333 [2024-07-27 02:31:40.253133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
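The "Zero copy mechanism will not be used" notice above is expected for this pass, not a failure: the 131072-byte IO size is above the sock layer's 65536-byte zero-copy threshold, so sends fall back to the regular path. Between the first two passes only the IO size and queue depth change:

    # pass 1: 4 KiB IOs at queue depth 128
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
    # pass 2: 128 KiB IOs at queue depth 16 (this one trips the 64 KiB threshold notice)
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc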
00:32:12.333 [2024-07-27 02:31:40.283426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.333 [2024-07-27 02:31:40.377458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.333 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:12.333 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:12.333 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:12.333 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:12.333 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:12.899 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:12.899 02:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.157 nvme0n1 00:32:13.157 02:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:13.157 02:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:13.157 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:13.157 Zero copy mechanism will not be used. 00:32:13.157 Running I/O for 2 seconds... 
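Every host/digest.sh@18 and @19 expansion in this log goes through two thin wrappers; reconstructed here from the xtrace (the function bodies are inferred, only their expansions appear above):

    bperf_rpc() {   # digest.sh@18: any rpc.py call, pointed at the bdevperf socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }
    bperf_py() {    # digest.sh@19: kick off and collect a run via bdevperf's helper
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"
    }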
00:32:15.681 00:32:15.681 Latency(us) 00:32:15.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.681 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:15.681 nvme0n1 : 2.01 2729.10 341.14 0.00 0.00 5857.98 5606.97 14563.56 00:32:15.681 =================================================================================================================== 00:32:15.681 Total : 2729.10 341.14 0.00 0.00 5857.98 5606.97 14563.56 00:32:15.681 0 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:15.681 | select(.opcode=="crc32c") 00:32:15.681 | "\(.module_name) \(.executed)"' 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1178712 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1178712 ']' 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1178712 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1178712 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1178712' 00:32:15.681 killing process with pid 1178712 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1178712 00:32:15.681 Received shutdown signal, test time was about 2.000000 seconds 00:32:15.681 00:32:15.681 Latency(us) 00:32:15.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.681 =================================================================================================================== 00:32:15.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1178712 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1179118 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1179118 /var/tmp/bperf.sock 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1179118 ']' 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:15.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:15.681 02:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:15.681 [2024-07-27 02:31:43.820588] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:15.681 [2024-07-27 02:31:43.820664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179118 ] 00:32:15.939 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.939 [2024-07-27 02:31:43.854167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
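The block that closed both passes above (digest.sh@93-@96) is the actual digest check: pull accel framework stats off the bperf socket and confirm the crc32c work executed in the expected module, which is software whenever scan_dsa=false. A sketch assembled from those xtrace lines:

    # get_accel_stats emits "module executed" for the crc32c opcode
    read -r acc_module acc_executed < <(bperf_rpc accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))              # digest offload actually ran
    [[ $acc_module == software ]]       # and ran in the expected module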
00:32:15.939 [2024-07-27 02:31:43.886669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.939 [2024-07-27 02:31:43.983482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.939 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:15.939 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:15.939 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:15.939 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:15.939 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:16.504 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:16.504 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:16.761 nvme0n1 00:32:16.761 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:16.761 02:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:16.761 Running I/O for 2 seconds... 
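What makes every pass exercise the digest path at all is the --ddgst flag on the controller attach above: it enables the NVMe/TCP data digest, so each data PDU carries a CRC32C that the host verifies through the accel framework. The attach, as issued over the bperf socket:

    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # surfaces bdev nvme0n1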
00:32:19.289 00:32:19.289 Latency(us) 00:32:19.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.289 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.289 nvme0n1 : 2.01 21168.82 82.69 0.00 0.00 6036.77 4369.07 15922.82 00:32:19.289 =================================================================================================================== 00:32:19.289 Total : 21168.82 82.69 0.00 0.00 6036.77 4369.07 15922.82 00:32:19.289 0 00:32:19.289 02:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:19.289 02:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:19.289 02:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:19.289 02:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:19.289 | select(.opcode=="crc32c") 00:32:19.289 | "\(.module_name) \(.executed)"' 00:32:19.289 02:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1179118 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1179118 ']' 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1179118 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1179118 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1179118' 00:32:19.289 killing process with pid 1179118 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1179118 00:32:19.289 Received shutdown signal, test time was about 2.000000 seconds 00:32:19.289 00:32:19.289 Latency(us) 00:32:19.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.289 =================================================================================================================== 00:32:19.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1179118 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:19.289 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1179639 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1179639 /var/tmp/bperf.sock 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1179639 ']' 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:19.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.290 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:19.290 [2024-07-27 02:31:47.379609] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:19.290 [2024-07-27 02:31:47.379702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179639 ] 00:32:19.290 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:19.290 Zero copy mechanism will not be used. 00:32:19.290 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.290 [2024-07-27 02:31:47.411952] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
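The @950-@969 sequence repeated after each pass is the shared killprocess helper; an approximate reconstruction from the xtrace above (guard ordering is inferred):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                            # @954: is it still alive?
        if [ "$(uname)" = Linux ]; then                       # @955
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
            [ "$process_name" = sudo ] && return 1            # @960: never kill a sudo wrapper
        fi
        echo "killing process with pid $pid"                  # @968
        kill "$pid"                                           # @969
        wait "$pid"                                           # @974: reap and collect status
    }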
00:32:19.290 [2024-07-27 02:31:47.439915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.547 [2024-07-27 02:31:47.531276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.547 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:19.547 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:19.547 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:19.547 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:19.547 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:20.112 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:20.112 02:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:20.371 nvme0n1 00:32:20.371 02:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:20.371 02:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:20.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:20.628 Zero copy mechanism will not be used. 00:32:20.628 Running I/O for 2 seconds... 
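waitforlisten (@831-@864, seen before every pass) blocks until the freshly launched bdevperf answers RPC on its socket. Only the setup lines and the (( i == 0 )) exit check appear in this log; the poll body below is inferred from how the upstream helper usually probes, so treat it as a sketch:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}        # @831/@835
        local max_retries=100 i                               # @836
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."   # @838
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1           # target died before listening
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                              # @860: retries exhausted
        return 0                                              # @864
    }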
00:32:22.526 00:32:22.526 Latency(us) 00:32:22.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.526 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:22.526 nvme0n1 : 2.01 1287.26 160.91 0.00 0.00 12394.52 9077.95 20971.52 00:32:22.526 =================================================================================================================== 00:32:22.526 Total : 1287.26 160.91 0.00 0.00 12394.52 9077.95 20971.52 00:32:22.526 0 00:32:22.526 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:22.526 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:22.526 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:22.526 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:22.526 | select(.opcode=="crc32c") 00:32:22.526 | "\(.module_name) \(.executed)"' 00:32:22.526 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1179639 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1179639 ']' 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1179639 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1179639 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1179639' 00:32:22.783 killing process with pid 1179639 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1179639 00:32:22.783 Received shutdown signal, test time was about 2.000000 seconds 00:32:22.783 00:32:22.783 Latency(us) 00:32:22.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.783 =================================================================================================================== 00:32:22.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:22.783 02:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1179639 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1178281 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1178281 ']' 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1178281 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1178281 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1178281' 00:32:23.040 killing process with pid 1178281 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1178281 00:32:23.040 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1178281 00:32:23.298 00:32:23.298 real 0m15.331s 00:32:23.298 user 0m29.872s 00:32:23.298 sys 0m4.253s 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:23.298 ************************************ 00:32:23.298 END TEST nvmf_digest_clean 00:32:23.298 ************************************ 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:23.298 ************************************ 00:32:23.298 START TEST nvmf_digest_error 00:32:23.298 ************************************ 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1180076 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:23.298 02:31:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1180076 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1180076 ']' 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.298 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.298 [2024-07-27 02:31:51.420841] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:23.298 [2024-07-27 02:31:51.420920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.298 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.298 [2024-07-27 02:31:51.458672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:23.557 [2024-07-27 02:31:51.485274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.557 [2024-07-27 02:31:51.570460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.557 [2024-07-27 02:31:51.570515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.557 [2024-07-27 02:31:51.570536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.557 [2024-07-27 02:31:51.570546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.557 [2024-07-27 02:31:51.570556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
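The error-test target above was started with -e 0xFFFF, i.e. every tracepoint group enabled, which is why app_setup_trace prints both capture routes. Either works while the target is up, exactly as the notices say:

    spdk_trace -s nvmf -i 0        # snapshot the live trace ring for app instance 0
    cp /dev/shm/nvmf_trace.0 .     # or keep the shm file for offline analysis/debug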
00:32:23.557 [2024-07-27 02:31:51.570596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.557 [2024-07-27 02:31:51.663231] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.557 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.815 null0 00:32:23.815 [2024-07-27 02:31:51.781824] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.815 [2024-07-27 02:31:51.806045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1180115 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1180115 /var/tmp/bperf.sock 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1180115 ']' 
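The error-injection plumbing for nvmf_digest_error is three RPCs: digest.sh@104 above routes crc32c to the error module, and the two accel_error_inject_error calls just below first clear the injector, then arm it (the -i 256 argument is passed straight through; its exact count/interval semantics are the module's, not asserted here):

    rpc_cmd accel_assign_opc -o crc32c -m error                     # @104: crc32c -> 'error' module
    rpc_cmd accel_error_inject_error -o crc32c -t disable           # @63: start with injection off
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256    # @67: arm corruption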
00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:23.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.815 02:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:23.815 [2024-07-27 02:31:51.852624] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:23.815 [2024-07-27 02:31:51.852687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180115 ] 00:32:23.815 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.815 [2024-07-27 02:31:51.886239] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:23.815 [2024-07-27 02:31:51.916832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.073 [2024-07-27 02:31:52.009565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.073 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.073 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:24.073 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:24.073 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:24.331 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:24.331 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.331 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:24.331 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.331 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.331 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.589 nvme0n1 00:32:24.589 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 
256 00:32:24.589 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.589 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:24.589 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.589 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:24.589 02:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:24.847 Running I/O for 2 seconds... 00:32:24.847 [2024-07-27 02:31:52.862882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.862925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.862949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.876494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.876525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.876547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.889831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.889862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.889886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.901857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.901888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.901907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.916476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.916512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.916540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.928768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.928800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.928819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.941654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.941685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.941706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.954414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.954458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.954491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.967485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.967524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.967543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.979596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.979627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.979646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:52.992358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:52.992396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:52.992413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.847 [2024-07-27 02:31:53.004295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:24.847 [2024-07-27 02:31:53.004327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.847 [2024-07-27 02:31:53.004345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.019619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 
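From here to the end of the section the log is the injected corruption paying off: every "data digest error" from nvme_tcp.c:1459 surfaces as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the bdev layer keeps retrying because the test set --bdev-retry-count -1. A quick way to size the storm from a saved copy of this console output (build.log is a hypothetical filename, not produced by the suite):

    grep -c 'data digest error on tqpair' build.log
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log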
00:32:25.106 [2024-07-27 02:31:53.019650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.106 [2024-07-27 02:31:53.019683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.032924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:25.106 [2024-07-27 02:31:53.032956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.106 [2024-07-27 02:31:53.032974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.045018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:25.106 [2024-07-27 02:31:53.045073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.106 [2024-07-27 02:31:53.045091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.059684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:25.106 [2024-07-27 02:31:53.059715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.106 [2024-07-27 02:31:53.059734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.074159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:25.106 [2024-07-27 02:31:53.074189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.106 [2024-07-27 02:31:53.074208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.085610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:25.106 [2024-07-27 02:31:53.085642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.106 [2024-07-27 02:31:53.085661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.098608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280) 00:32:25.106 [2024-07-27 02:31:53.098661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.106 [2024-07-27 02:31:53.098684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.106 [2024-07-27 02:31:53.111314] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280)
00:32:25.106 [2024-07-27 02:31:53.111355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:25.106 [2024-07-27 02:31:53.111372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x226c280), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly 150 further reads from 02:31:53.122 through 02:31:54.824, all on qid:1 with varying cid and lba ...]
00:32:26.688 [2024-07-27 02:31:54.838262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280)
00:32:26.688 [2024-07-27 02:31:54.838297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:26.688 [2024-07-27 02:31:54.838315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
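Each three-line group above is one deliberately failed read: the CRC32C data digest carried in the NVMe/TCP C2H data PDU does not match the payload, nvme_tcp_accel_seq_recv_compute_crc32_done rejects it, and the host completes the command with the generic NVMe status COMMAND TRANSIENT TRANSPORT ERROR (00/22, i.e. generic status type, code 0x22), which is the event this digest-error test counts. A minimal sketch (a hypothetical helper, not part of digest.sh) that recovers the same count from a saved console log:

#!/usr/bin/env bash
# Hypothetical helper: count the digest failures in a captured bperf log.
# Every rejected read above is completed as (00/22), so counting those
# completion lines matches the driver's transient-error counter.
count_digest_errors() {
    local logfile=$1
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$logfile"
}

count_digest_errors /tmp/bperf.log   # for the run above this prints 152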
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:26.688 [2024-07-27 02:31:54.824712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280)
00:32:26.688 [2024-07-27 02:31:54.824744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:26.688 [2024-07-27 02:31:54.824762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:26.688 [2024-07-27 02:31:54.838262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226c280)
00:32:26.688 [2024-07-27 02:31:54.838297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:26.688 [2024-07-27 02:31:54.838315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:26.945
00:32:26.945 Latency(us)
00:32:26.945 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:26.945 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:26.945 nvme0n1                     :       2.00   19445.08      75.96       0.00     0.00    6573.52    3422.44   21845.33
00:32:26.945 ===================================================================================================================
00:32:26.945 Total                       :            19445.08      75.96       0.00     0.00    6573.52    3422.44   21845.33
00:32:26.945 0
00:32:26.945 02:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:26.945 02:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:26.945 02:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:26.945 | .driver_specific
00:32:26.945 | .nvme_error
00:32:26.945 | .status_code
00:32:26.945 | .command_transient_transport_error'
00:32:26.945 02:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 ))
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1180115
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1180115 ']'
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1180115
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1180115
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1180115'
00:32:27.203 killing process with pid 1180115
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1180115
00:32:27.203 Received shutdown signal, test time was about 2.000000 seconds
00:32:27.203
00:32:27.203 Latency(us)
00:32:27.203 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:27.203 ===================================================================================================================
00:32:27.203 Total                       :                0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1180115
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1180625
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1180625 /var/tmp/bperf.sock
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1180625 ']'
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:27.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:27.203 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:27.461 [2024-07-27 02:31:55.403756] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:32:27.461 [2024-07-27 02:31:55.403835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180625 ]
00:32:27.461 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:27.461 Zero copy mechanism will not be used.
00:32:27.461 EAL: No free 2048 kB hugepages reported on node 1
00:32:27.461 [2024-07-27 02:31:55.437748] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
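For context, the pass/fail check traced above is a plain iostat readback: with --nvme-error-stat enabled, bdevperf accumulates per-status-code NVMe error counters, and get_transient_errcount extracts the transient-transport-error count from bdev_get_iostat. A minimal standalone sketch of that readback (socket path and bdev name as in this run; jq filter copied from the trace, collapsed to one line):

    # Count COMMAND TRANSIENT TRANSPORT ERROR completions seen by the bperf bdev.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The test only asserts that the count is non-zero; the 4096-byte randread pass above read back 152 before tearing the bperf process down and relaunching it for the 131072-byte, qd 16 pass.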
00:32:27.461 [2024-07-27 02:31:55.466389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:27.461 [2024-07-27 02:31:55.556666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:27.719 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:27.719 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:27.719 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:27.719 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:27.977 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:27.977 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:27.977 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:27.977 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:27.977 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:27.977 02:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:28.235 nvme0n1
00:32:28.493 02:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:28.493 02:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.493 02:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:28.493 02:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.493 02:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:28.493 02:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:28.493 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:28.493 Zero copy mechanism will not be used.
00:32:28.493 Running I/O for 2 seconds...
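The setup just traced arms this pass: error statistics and unlimited retries on the host bdev, a data-digest-enabled (--ddgst) NVMe/TCP controller attach, and crc32c corruption injected on every 32nd accel operation, so the receive-path digest check fails intermittently once perform_tests starts. A condensed sketch of the same sequence, with the commands taken from the trace above (relative paths assumed; the accel_error_inject_error calls go through rpc_cmd, i.e. the default RPC socket rather than bperf.sock):

    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # start from a clean state
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32    # corrupt every 32nd crc32c op
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces below as a "data digest error" on the qpair followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the bdev layer keeps retrying thanks to --bdev-retry-count -1 while the error counters accumulate.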
00:32:28.493 [2024-07-27 02:31:56.541230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.541282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.541303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.553922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.553958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.553978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.566565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.566601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.566621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.579272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.579303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.579321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.591921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.591956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.591975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.604809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.604844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.604863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.617486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.617521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.617541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.629987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.630023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.630043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.493 [2024-07-27 02:31:56.642621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.493 [2024-07-27 02:31:56.642656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.493 [2024-07-27 02:31:56.642676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.655251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.655281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.655298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.667954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.667988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.668013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.680621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.680655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.680675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.693277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.693306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.693323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.705963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.705997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.706017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.718538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.718573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.718592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.731148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.731178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.731196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.743641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.743674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.743694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.756040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.756082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.756116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.768536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.768570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.768589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.781055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.781117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.781134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.793857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.793891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:28.752 [2024-07-27 02:31:56.793910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.806513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.806548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.806568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.819120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.819151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.819169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.831948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.831983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.832003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.844610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.844643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.844662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.857149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.857179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.857197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.869801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.869835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.869854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.882381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.882428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.882448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.894267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.894299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.894317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.752 [2024-07-27 02:31:56.906699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:28.752 [2024-07-27 02:31:56.906734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.752 [2024-07-27 02:31:56.906754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.011 [2024-07-27 02:31:56.918627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.011 [2024-07-27 02:31:56.918658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.011 [2024-07-27 02:31:56.918675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.011 [2024-07-27 02:31:56.930134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.011 [2024-07-27 02:31:56.930164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.011 [2024-07-27 02:31:56.930182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.011 [2024-07-27 02:31:56.941747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.011 [2024-07-27 02:31:56.941778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.011 [2024-07-27 02:31:56.941795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.011 [2024-07-27 02:31:56.953399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.011 [2024-07-27 02:31:56.953429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.011 [2024-07-27 02:31:56.953446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:56.964987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:56.965018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:56.965035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:56.976533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:56.976563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:56.976580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:56.988045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:56.988083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:56.988111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:56.999756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:56.999788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:56.999805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.011354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.011401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.011419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.022865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.022897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.022930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.034850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.034882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.034899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.046638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 
00:32:29.012 [2024-07-27 02:31:57.046671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.046688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.058319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.058352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.058384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.069811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.069841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.069858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.081342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.081389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.081405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.092759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.092790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.092806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.104153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.104199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.104217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.115789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.115819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.115836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.127313] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.127344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.127361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.138853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.138884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.138900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.150485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.150516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.150548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.012 [2024-07-27 02:31:57.162008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.012 [2024-07-27 02:31:57.162038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.012 [2024-07-27 02:31:57.162055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.271 [2024-07-27 02:31:57.173594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.271 [2024-07-27 02:31:57.173641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.271 [2024-07-27 02:31:57.173658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.271 [2024-07-27 02:31:57.185194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.271 [2024-07-27 02:31:57.185225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.271 [2024-07-27 02:31:57.185252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.271 [2024-07-27 02:31:57.196728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.271 [2024-07-27 02:31:57.196757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.271 [2024-07-27 02:31:57.196774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:32:29.271 [2024-07-27 02:31:57.208312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.271 [2024-07-27 02:31:57.208359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.271 [2024-07-27 02:31:57.208376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.271 [2024-07-27 02:31:57.219846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.271 [2024-07-27 02:31:57.219877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.271 [2024-07-27 02:31:57.219894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.231477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.231508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.231525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.242919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.242967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.242984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.254513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.254544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.254576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.266137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.266169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.266187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.277612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.277644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.277676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.289161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.289201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.289220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.300942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.300972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.300990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.312430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.312463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.312480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.324007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.324053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.324081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.335589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.335620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.335653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.347036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.347077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.347096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.358594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.358641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.358659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.370139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.370171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.370188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.382190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.382223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.382240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.393864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.393895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.393912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.405452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.405484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.405517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.417072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.417106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.417123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.272 [2024-07-27 02:31:57.428710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.272 [2024-07-27 02:31:57.428742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.272 [2024-07-27 02:31:57.428759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.531 [2024-07-27 02:31:57.440294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.531 [2024-07-27 02:31:57.440326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:29.531 [2024-07-27 02:31:57.440343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.531 [2024-07-27 02:31:57.451929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.531 [2024-07-27 02:31:57.451959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.531 [2024-07-27 02:31:57.451977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.531 [2024-07-27 02:31:57.463850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.531 [2024-07-27 02:31:57.463897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.531 [2024-07-27 02:31:57.463914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.531 [2024-07-27 02:31:57.475463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.531 [2024-07-27 02:31:57.475509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.532 [2024-07-27 02:31:57.475527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.532 [2024-07-27 02:31:57.486978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.532 [2024-07-27 02:31:57.487014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.532 [2024-07-27 02:31:57.487033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.532 [2024-07-27 02:31:57.498728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.532 [2024-07-27 02:31:57.498774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.532 [2024-07-27 02:31:57.498791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.532 [2024-07-27 02:31:57.510560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.532 [2024-07-27 02:31:57.510592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.532 [2024-07-27 02:31:57.510609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.532 [2024-07-27 02:31:57.522543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390) 00:32:29.532 [2024-07-27 02:31:57.522576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:29.532 [2024-07-27 02:31:57.522593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... a long run of near-identical notices elided (one roughly every 11.6 ms from 02:31:57.534086 through 02:31:58.514985): nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done reports "data digest error on tqpair=(0x12df390)", nvme_qpair.c prints the affected READ (sqid:1 cid:15 nsid:1, len:32, varying lba) and its completion, COMMAND TRANSIENT TRANSPORT ERROR (00/22), with sqhd cycling 0001/0021/0041/0061 ...]
00:32:30.568 [2024-07-27 02:31:58.527595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12df390)
00:32:30.568 [2024-07-27 02:31:58.527635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.568 [2024-07-27 02:31:58.527655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:30.568
00:32:30.568 Latency(us)
00:32:30.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:30.568 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:30.568 nvme0n1 : 2.00 2576.49 322.06 0.00 0.00 6205.63 5582.70 13204.29
00:32:30.568 ===================================================================================================================
00:32:30.568 Total : 2576.49 322.06 0.00 0.00 6205.63 5582.70 13204.29
00:32:30.568 0
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:30.568 | .driver_specific
00:32:30.568 | .nvme_error
00:32:30.568 | .status_code
00:32:30.568 | .command_transient_transport_error'
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:30.827 02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 ))
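The (( 166 > 0 )) check above is the pass gate for this randread pass: get_transient_errcount reads the per-bdev NVMe error counters and treats any nonzero COMMAND TRANSIENT TRANSPORT ERROR tally as proof that the injected digest corruption actually surfaced on the wire. A minimal standalone sketch of the same query, assuming the socket path and bdev name used in this run (the variable names and wrapper are illustrative, not part of the harness):

#!/usr/bin/env bash
# Sketch: read the per-bdev NVMe error counters and pull out the transient
# transport error tally, exactly the jq path used by the test trace above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
SOCK=/var/tmp/bperf.sock                                     # bdevperf RPC socket

errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')

# Any nonzero count means corrupted digests were reported as status 00/22
# instead of being silently masked; in this run the count was 166.
(( errcount > 0 ))

Note that the nvme_error block is only populated when the controller was set up with bdev_nvme_set_options --nvme-error-stat, as the harness does before attaching it.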
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1180625
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1180625 ']'
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1180625
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1180625
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1180625'
killing process with pid 1180625
02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1180625
Received shutdown signal, test time was about 2.000000 seconds
00:32:30.827
00:32:30.827 Latency(us)
00:32:30.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:30.827 ===================================================================================================================
00:32:30.827 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:30.827 02:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1180625
00:32:31.086 02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1181036
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1181036 /var/tmp/bperf.sock
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1181036 ']'
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
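The randwrite pass just launched follows the same pattern as the randread one above: bdevperf starts idle (-z, no bdevs yet) on a private RPC socket and does no I/O until perform_tests arrives. A rough sketch of that launch-and-wait step, with the polling loop as a simplified stand-in for the harness's waitforlisten helper (retry count and sleep interval are illustrative):

#!/usr/bin/env bash
# Sketch of the run_bperf_err launch step: bdevperf pinned to core 1 (-m 2),
# 4 KiB random writes, queue depth 128, 2-second runs, idle until told to start.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
SOCK=/var/tmp/bperf.sock

"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: retry a trivial RPC until the app's
# socket answers (the real helper allows up to 100 retries before giving up).
for _ in $(seq 1 100); do
  "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
  sleep 0.1
done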
00:32:31.086 [2024-07-27 02:31:59.121024] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:32:31.086 [2024-07-27 02:31:59.121145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181036 ]
00:32:31.086 EAL: No free 2048 kB hugepages reported on node 1
00:32:31.086 [2024-07-27 02:31:59.152943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:31.086 [2024-07-27 02:31:59.180222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:31.345 [2024-07-27 02:31:59.270293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
02:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:32.170 nvme0n1
02:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
02:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
02:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
02:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
02:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
02:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:32.170 Running I/O for 2 seconds...
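The RPC sequence just traced is what makes the flood of WRITE digest errors below deliberate: NVMe error counters are switched on with unbounded bdev retries (so corrupted commands keep being retried instead of failing the job), the controller is attached with TCP data digest (--ddgst) enabled, and the accel layer is told to corrupt the next 256 crc32c operations. The same sequence condensed into one place, a sketch assuming the socket and target addresses from this run:

#!/usr/bin/env bash
# Sketch of the error-injection setup traced above, in the order it runs.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
RPC() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors; retry forever
RPC accel_error_inject_error -o crc32c -t disable                  # start from a clean slate
RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # data digest on the data path
RPC accel_error_inject_error -o crc32c -t corrupt -i 256           # corrupt 256 crc32c results

# I/O is kicked off through bdevperf's own helper rather than rpc.py:
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests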
00:32:32.170 [2024-07-27 02:32:00.176990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f7da8
[2024-07-27 02:32:00.178042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-27 02:32:00.178094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
[... a long run of similar notices elided (one roughly every 12.5 ms from 02:32:00.190492 through 02:32:00.727665): tcp.c:2113:data_crc32_calc_done reports "Data digest error on tqpair=(0x2447940)" with a varying pdu offset, and the corrupted WRITE (sqid:1, varying cid and lba, len:1) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:32:32.687 [2024-07-27 02:32:00.741368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e1f80
[2024-07-27 02:32:00.742575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-27
02:32:00.742603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.687 [2024-07-27 02:32:00.755125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e0ea0 00:32:32.687 [2024-07-27 02:32:00.756309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.687 [2024-07-27 02:32:00.756362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.687 [2024-07-27 02:32:00.768936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f0350 00:32:32.687 [2024-07-27 02:32:00.770137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.687 [2024-07-27 02:32:00.770168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.687 [2024-07-27 02:32:00.782629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ef270 00:32:32.687 [2024-07-27 02:32:00.783836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.688 [2024-07-27 02:32:00.783863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.688 [2024-07-27 02:32:00.796163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ee190 00:32:32.688 [2024-07-27 02:32:00.797375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.688 [2024-07-27 02:32:00.797402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.688 [2024-07-27 02:32:00.809755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f35f0 00:32:32.688 [2024-07-27 02:32:00.810967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.688 [2024-07-27 02:32:00.810995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.688 [2024-07-27 02:32:00.823310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f2510 00:32:32.688 [2024-07-27 02:32:00.824520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.688 [2024-07-27 02:32:00.824550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.688 [2024-07-27 02:32:00.836943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f1430 00:32:32.688 [2024-07-27 02:32:00.838155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23131 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:32.688 [2024-07-27 02:32:00.838186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.850678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f5be8 00:32:32.946 [2024-07-27 02:32:00.851911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.851940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.864373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e8d30 00:32:32.946 [2024-07-27 02:32:00.865574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.865602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.878035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e9e10 00:32:32.946 [2024-07-27 02:32:00.879252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.879278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.891642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e38d0 00:32:32.946 [2024-07-27 02:32:00.892838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.892866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.905313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f8a50 00:32:32.946 [2024-07-27 02:32:00.906535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.906564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.919006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190df550 00:32:32.946 [2024-07-27 02:32:00.920229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.920257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.932631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f5378 00:32:32.946 [2024-07-27 02:32:00.933871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22301 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.933904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.946303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f4298 00:32:32.946 [2024-07-27 02:32:00.947503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.947531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.959932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e7818 00:32:32.946 [2024-07-27 02:32:00.961140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.946 [2024-07-27 02:32:00.961173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.946 [2024-07-27 02:32:00.973513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e1b48 00:32:32.947 [2024-07-27 02:32:00.974731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:00.974759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:00.987246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e0a68 00:32:32.947 [2024-07-27 02:32:00.988446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:00.988474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.000932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190eff18 00:32:32.947 [2024-07-27 02:32:01.002148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.002179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.014504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190eee38 00:32:32.947 [2024-07-27 02:32:01.015707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.015734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.028162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190edd58 00:32:32.947 [2024-07-27 02:32:01.029336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:24270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.029365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.041866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f31b8 00:32:32.947 [2024-07-27 02:32:01.043100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.043132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.055441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f20d8 00:32:32.947 [2024-07-27 02:32:01.056620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.056646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.068951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f0ff8 00:32:32.947 [2024-07-27 02:32:01.070161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.070188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.082562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f6020 00:32:32.947 [2024-07-27 02:32:01.083756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.083787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.947 [2024-07-27 02:32:01.096242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e9168 00:32:32.947 [2024-07-27 02:32:01.097436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.947 [2024-07-27 02:32:01.097463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.109983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e3498 00:32:33.205 [2024-07-27 02:32:01.111204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.111241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.123636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f8618 00:32:33.205 [2024-07-27 02:32:01.124820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:126 nsid:1 lba:24763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.124847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.137134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f96f8 00:32:33.205 [2024-07-27 02:32:01.138287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.138314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.150684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e01f8 00:32:33.205 [2024-07-27 02:32:01.151853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.151880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.164396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f46d0 00:32:33.205 [2024-07-27 02:32:01.165578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.165604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.177601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e7c50 00:32:33.205 [2024-07-27 02:32:01.178774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.178800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.191144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e1f80 00:32:33.205 [2024-07-27 02:32:01.192240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.192267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.204714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e0ea0 00:32:33.205 [2024-07-27 02:32:01.205917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.205945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.218538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f0350 00:32:33.205 [2024-07-27 02:32:01.219701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.219728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.232184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ef270 00:32:33.205 [2024-07-27 02:32:01.233332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.233359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.245741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ee190 00:32:33.205 [2024-07-27 02:32:01.246894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.246924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.259240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f35f0 00:32:33.205 [2024-07-27 02:32:01.260380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.260406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.272893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f2510 00:32:33.205 [2024-07-27 02:32:01.274079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.274107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.286435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f1430 00:32:33.205 [2024-07-27 02:32:01.287623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.287652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.300038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f5be8 00:32:33.205 [2024-07-27 02:32:01.301263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.301289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.313506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e8d30 00:32:33.205 [2024-07-27 
02:32:01.314679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.314705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.327145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e9e10 00:32:33.205 [2024-07-27 02:32:01.328292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.328319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.340682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e38d0 00:32:33.205 [2024-07-27 02:32:01.341838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.341865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.205 [2024-07-27 02:32:01.354259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f8a50 00:32:33.205 [2024-07-27 02:32:01.355382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.205 [2024-07-27 02:32:01.355408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.367918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190df550 00:32:33.463 [2024-07-27 02:32:01.369142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.369174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.381541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f5378 00:32:33.463 [2024-07-27 02:32:01.382710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.382739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.395029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f4298 00:32:33.463 [2024-07-27 02:32:01.396245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.396271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.408613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e7818 
00:32:33.463 [2024-07-27 02:32:01.409810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.409836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.422348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e1b48 00:32:33.463 [2024-07-27 02:32:01.423544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.423570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.436006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e0a68 00:32:33.463 [2024-07-27 02:32:01.437210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.437242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.449671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190eff18 00:32:33.463 [2024-07-27 02:32:01.450830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.450857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.463548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190eee38 00:32:33.463 [2024-07-27 02:32:01.464735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.464778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.477135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190edd58 00:32:33.463 [2024-07-27 02:32:01.478304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.478331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.490863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f31b8 00:32:33.463 [2024-07-27 02:32:01.492065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.492091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.504345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with 
pdu=0x2000190f20d8 00:32:33.463 [2024-07-27 02:32:01.505520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.505546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.517896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f0ff8 00:32:33.463 [2024-07-27 02:32:01.519097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.519127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.531710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f6020 00:32:33.463 [2024-07-27 02:32:01.532888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.532915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.545390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e9168 00:32:33.463 [2024-07-27 02:32:01.546561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.546586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.558890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e3498 00:32:33.463 [2024-07-27 02:32:01.560075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.560104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.572511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f8618 00:32:33.463 [2024-07-27 02:32:01.573681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.573706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.586078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f96f8 00:32:33.463 [2024-07-27 02:32:01.587300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.587327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.599821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2447940) with pdu=0x2000190e01f8 00:32:33.463 [2024-07-27 02:32:01.600990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.601016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.463 [2024-07-27 02:32:01.613378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f46d0 00:32:33.463 [2024-07-27 02:32:01.614570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.463 [2024-07-27 02:32:01.614595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.627014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e7c50 00:32:33.721 [2024-07-27 02:32:01.628255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.628284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.640693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e1f80 00:32:33.721 [2024-07-27 02:32:01.641912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.641943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.654328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e0ea0 00:32:33.721 [2024-07-27 02:32:01.655537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.655572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.668014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f0350 00:32:33.721 [2024-07-27 02:32:01.669225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.669253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.681877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ef270 00:32:33.721 [2024-07-27 02:32:01.683109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.683138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.695502] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ee190 00:32:33.721 [2024-07-27 02:32:01.696705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.696731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.709096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f35f0 00:32:33.721 [2024-07-27 02:32:01.710290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.710316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.722890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f2510 00:32:33.721 [2024-07-27 02:32:01.724121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.724161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.736556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f1430 00:32:33.721 [2024-07-27 02:32:01.737759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.737787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.750263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f5be8 00:32:33.721 [2024-07-27 02:32:01.751437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.751463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.763839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e8d30 00:32:33.721 [2024-07-27 02:32:01.765049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.765079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.721 [2024-07-27 02:32:01.777576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e9e10 00:32:33.721 [2024-07-27 02:32:01.778793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.778819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:32:33.721 [2024-07-27 02:32:01.791466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e38d0 00:32:33.721 [2024-07-27 02:32:01.792652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.721 [2024-07-27 02:32:01.792677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.722 [2024-07-27 02:32:01.805190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f8a50 00:32:33.722 [2024-07-27 02:32:01.806387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.722 [2024-07-27 02:32:01.806412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.722 [2024-07-27 02:32:01.818877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190df550 00:32:33.722 [2024-07-27 02:32:01.820089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.722 [2024-07-27 02:32:01.820122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.722 [2024-07-27 02:32:01.832637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f5378 00:32:33.722 [2024-07-27 02:32:01.833853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.722 [2024-07-27 02:32:01.833878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.722 [2024-07-27 02:32:01.846302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f4298 00:32:33.722 [2024-07-27 02:32:01.847512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.722 [2024-07-27 02:32:01.847538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.722 [2024-07-27 02:32:01.859936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e7818 00:32:33.722 [2024-07-27 02:32:01.861145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.722 [2024-07-27 02:32:01.861174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.722 [2024-07-27 02:32:01.873542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e1b48 00:32:33.722 [2024-07-27 02:32:01.874731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.722 [2024-07-27 02:32:01.874756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 
sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.887115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e0a68 00:32:33.980 [2024-07-27 02:32:01.888326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.888365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.900707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190eff18 00:32:33.980 [2024-07-27 02:32:01.901896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.901921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.914347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190eee38 00:32:33.980 [2024-07-27 02:32:01.915579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.915608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.927905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190edd58 00:32:33.980 [2024-07-27 02:32:01.929135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.929161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.941635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f31b8 00:32:33.980 [2024-07-27 02:32:01.942826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.942852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.955233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f20d8 00:32:33.980 [2024-07-27 02:32:01.956424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.956450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.968924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f0ff8 00:32:33.980 [2024-07-27 02:32:01.970132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.970161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.982550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f6020 00:32:33.980 [2024-07-27 02:32:01.983761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.983792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:01.996160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e9168 00:32:33.980 [2024-07-27 02:32:01.997362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:01.997388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:02.009814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e3498 00:32:33.980 [2024-07-27 02:32:02.011022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:02.011049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:02.023284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f8618 00:32:33.980 [2024-07-27 02:32:02.024496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:02.024522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:02.036967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f96f8 00:32:33.980 [2024-07-27 02:32:02.038187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:02.038217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:02.050579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e01f8 00:32:33.980 [2024-07-27 02:32:02.051783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:02.051809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:02.064248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f46d0 00:32:33.980 [2024-07-27 02:32:02.065430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.980 [2024-07-27 02:32:02.065455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.980 [2024-07-27 02:32:02.077910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e7c50 00:32:33.981 [2024-07-27 02:32:02.079121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.981 [2024-07-27 02:32:02.079151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.981 [2024-07-27 02:32:02.091732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e1f80 00:32:33.981 [2024-07-27 02:32:02.092953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.981 [2024-07-27 02:32:02.092980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.981 [2024-07-27 02:32:02.105293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190e0ea0 00:32:33.981 [2024-07-27 02:32:02.106503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.981 [2024-07-27 02:32:02.106529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.981 [2024-07-27 02:32:02.119138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f0350 00:32:33.981 [2024-07-27 02:32:02.120332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.981 [2024-07-27 02:32:02.120375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:33.981 [2024-07-27 02:32:02.133009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ef270 00:32:33.981 [2024-07-27 02:32:02.134225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.981 [2024-07-27 02:32:02.134252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:34.239 [2024-07-27 02:32:02.146822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190ee190 00:32:34.239 [2024-07-27 02:32:02.148053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.239 [2024-07-27 02:32:02.148111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:34.239 [2024-07-27 02:32:02.160556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2447940) with pdu=0x2000190f35f0 00:32:34.239 [2024-07-27 02:32:02.161766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.239 [2024-07-27 02:32:02.161793] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:34.239
00:32:34.239 Latency(us)
00:32:34.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:34.239 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:34.239 nvme0n1 : 2.00 18659.43 72.89 0.00 0.00 6848.55 2997.67 14660.65
00:32:34.239 ===================================================================================================================
00:32:34.239 Total : 18659.43 72.89 0.00 0.00 6848.55 2997.67 14660.65
00:32:34.239 0
00:32:34.239 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:34.239 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:34.239 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:34.239 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:34.239 | .driver_specific
00:32:34.239 | .nvme_error
00:32:34.239 | .status_code
00:32:34.239 | .command_transient_transport_error'
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 ))
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1181036
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1181036 ']'
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1181036
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1181036
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1181036'
00:32:34.497 killing process with pid 1181036
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1181036
00:32:34.497 Received shutdown signal, test time was about 2.000000 seconds
00:32:34.497
00:32:34.497 Latency(us)
00:32:34.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:34.497 ===================================================================================================================
00:32:34.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:34.497 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1181036
00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:34.754 02:32:02
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1181446 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1181446 /var/tmp/bperf.sock 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1181446 ']' 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:34.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.754 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:34.754 [2024-07-27 02:32:02.738453] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:34.754 [2024-07-27 02:32:02.738542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181446 ] 00:32:34.754 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:34.754 Zero copy mechanism will not be used. 00:32:34.754 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.754 [2024-07-27 02:32:02.769268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:34.754 [2024-07-27 02:32:02.800479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.754 [2024-07-27 02:32:02.888577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.012 02:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.012 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:35.012 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:35.012 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:35.274 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:35.274 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.274 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.274 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.275 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.275 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.840 nvme0n1 00:32:35.840 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:35.840 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.840 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.840 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.840 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:35.840 02:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:35.840 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:35.840 Zero copy mechanism will not be used. 00:32:35.840 Running I/O for 2 seconds... 
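The trace above is the complete host-side setup for this error case: start bdevperf against /var/tmp/bperf.sock, enable per-status-code NVMe error counting with unlimited retries, attach the controller with data digest (--ddgst) enabled, arm crc32c corruption in the target's accel layer, and kick off the queued run. Condensed into plain commands, the sequence is roughly the following sketch (not the harness itself; it assumes an SPDK checkout at the workspace path used in this run, a TCP target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and that accel_error_inject_error goes to that target's default RPC socket, which is what rpc_cmd does in the traced script):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Host-side initiator; -z makes it sit idle until perform_tests is sent.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
  sleep 1   # crude stand-in: the harness polls for the socket (waitforlisten)

  # Count NVMe errors per status code and retry failed I/O indefinitely.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection off while connecting, so the attach itself succeeds ...
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # ... attach with data digest enabled, so corrupted payload CRCs are caught ...
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ... then corrupt the next 32 crc32c operations and run the queued workload.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

With the error injector set to corrupt, every data PDU the target checks fails its CRC32C data digest, which is exactly the repeating tcp.c data_crc32_calc_done / TRANSIENT TRANSPORT ERROR pattern in the log that follows.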
00:32:35.840 [2024-07-27 02:32:03.893855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:35.840 [2024-07-27 02:32:03.894284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.840 [2024-07-27 02:32:03.894323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.840 [2024-07-27 02:32:03.913466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:35.840 [2024-07-27 02:32:03.913932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.840 [2024-07-27 02:32:03.913962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.840 [2024-07-27 02:32:03.932598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:35.840 [2024-07-27 02:32:03.932979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.840 [2024-07-27 02:32:03.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.840 [2024-07-27 02:32:03.949922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:35.840 [2024-07-27 02:32:03.950358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.840 [2024-07-27 02:32:03.950403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.840 [2024-07-27 02:32:03.968899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:35.840 [2024-07-27 02:32:03.969273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.840 [2024-07-27 02:32:03.969304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.840 [2024-07-27 02:32:03.987915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:35.840 [2024-07-27 02:32:03.988348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.840 [2024-07-27 02:32:03.988379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.005865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.006250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.006295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.024366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.024770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.024812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.043307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.043819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.043848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.062531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.062782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.062812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.081649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.082030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.082080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.101148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.101624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.101667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.119723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.120118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.120162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.135648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.136029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.136065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.153817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.154322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.154352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.172968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.173431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.173461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.192289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.192650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.192679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.211015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.211488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.211526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.229716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.230109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.230154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.099 [2024-07-27 02:32:04.246023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.099 [2024-07-27 02:32:04.246468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.099 [2024-07-27 02:32:04.246498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.357 [2024-07-27 02:32:04.265030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.357 [2024-07-27 02:32:04.265503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-07-27 02:32:04.265548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.357 [2024-07-27 02:32:04.283668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.357 [2024-07-27 02:32:04.284123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-07-27 02:32:04.284153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.357 [2024-07-27 02:32:04.301910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.357 [2024-07-27 02:32:04.302367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-07-27 02:32:04.302396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.357 [2024-07-27 02:32:04.320360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.357 [2024-07-27 02:32:04.320744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-07-27 02:32:04.320773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.357 [2024-07-27 02:32:04.338570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.357 [2024-07-27 02:32:04.339107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.339151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.357807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.358233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.358267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.374538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.374919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.374950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.393864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.394397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:36.358 [2024-07-27 02:32:04.394441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.411127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.411617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.411646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.430445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.430951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.430982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.449022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.449408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.449439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.467585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.467962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.467993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.486334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.486790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.486835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.358 [2024-07-27 02:32:04.505231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.358 [2024-07-27 02:32:04.505685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-07-27 02:32:04.505714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.615 [2024-07-27 02:32:04.523700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.615 [2024-07-27 02:32:04.524152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.615 [2024-07-27 02:32:04.524192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.615 [2024-07-27 02:32:04.541987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.542532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.542579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.560751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.561302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.561346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.579621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.580174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.580218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.600379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.600849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.600876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.619765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.620239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.620268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.638182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.638721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.638749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.655725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.656134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.656177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.674494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.674963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.674992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.694027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.694425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.694455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.712835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.713388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.713436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.731646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.732030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.732066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.749463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.749924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.749952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.616 [2024-07-27 02:32:04.767706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.616 [2024-07-27 02:32:04.768289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.616 [2024-07-27 02:32:04.768331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.873 [2024-07-27 02:32:04.790101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.873 [2024-07-27 02:32:04.790512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.873 [2024-07-27 02:32:04.790556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.873 [2024-07-27 02:32:04.808587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.873 [2024-07-27 02:32:04.808938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.808967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.829202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.829739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.829783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.848614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.849234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.849265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.866132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.866507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.866551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.885933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.886313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.886343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.904489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.904937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.904965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.923070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 
[2024-07-27 02:32:04.923566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.923610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.943457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.943959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.943987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.963565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.963955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.963998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:04.982258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:04.982721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:04.982748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:05.000740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:05.001145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:05.001188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.874 [2024-07-27 02:32:05.018810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:36.874 [2024-07-27 02:32:05.019311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.874 [2024-07-27 02:32:05.019358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.036610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.036801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.036830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.055498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.056026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.056082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.075121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.075640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.075667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.094822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.095251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.095280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.114277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.114675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.114718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.133035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.133504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.133546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.152431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.152912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.152939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.174119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.174506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.174535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.192804] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.193335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.193378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.211713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.212270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.212314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.231676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.232098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.232143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.250408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.250854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.250882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.266400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.266914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.266942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.132 [2024-07-27 02:32:05.281638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.132 [2024-07-27 02:32:05.282172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.132 [2024-07-27 02:32:05.282215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.390 [2024-07-27 02:32:05.301589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.390 [2024-07-27 02:32:05.302020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.390 [2024-07-27 02:32:05.302047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
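Each digest failure above increments the host-side counter enabled by --nvme-error-stat; the pass/fail decision reads that counter back over RPC rather than parsing the log. A minimal equivalent of get_transient_errcount (digest.sh@27-28 in the earlier trace), assuming the same $SPDK checkout and bperf socket as the sketch above:

  errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # as at digest.sh@71: the case passes only if digest errors were actually recorded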
00:32:37.390 [2024-07-27 02:32:05.320626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.390 [2024-07-27 02:32:05.320991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.390 [2024-07-27 02:32:05.321019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.390 [2024-07-27 02:32:05.339679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.390 [2024-07-27 02:32:05.340225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.390 [2024-07-27 02:32:05.340254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.390 [2024-07-27 02:32:05.359845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.390 [2024-07-27 02:32:05.360320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.390 [2024-07-27 02:32:05.360363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.390 [2024-07-27 02:32:05.379458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.390 [2024-07-27 02:32:05.379964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.379991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.399034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.399431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.399473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.418314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.418706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.418734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.435841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.436395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.436441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.455703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.456125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.456168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.474394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.474718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.474761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.492937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.493353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.493397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.511610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.512008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.512056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.529674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.530136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.530164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.391 [2024-07-27 02:32:05.547734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.391 [2024-07-27 02:32:05.548230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.391 [2024-07-27 02:32:05.548259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.566295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.566799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.566827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.584597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.585157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.585200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.601552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.602214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.602257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.620362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.620729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.620756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.637915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.638468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.638513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.656825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.657246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.657289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.676246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.676804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.676831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.695395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.695786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.695829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.712093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.712546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.712587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.731211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.649 [2024-07-27 02:32:05.731710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.649 [2024-07-27 02:32:05.731738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.649 [2024-07-27 02:32:05.748971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.650 [2024-07-27 02:32:05.749596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.650 [2024-07-27 02:32:05.749625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.650 [2024-07-27 02:32:05.768364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.650 [2024-07-27 02:32:05.768735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.650 [2024-07-27 02:32:05.768776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.650 [2024-07-27 02:32:05.788486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.650 [2024-07-27 02:32:05.788930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.650 [2024-07-27 02:32:05.788957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.650 [2024-07-27 02:32:05.807711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.650 [2024-07-27 02:32:05.808106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.650 [2024-07-27 02:32:05.808135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.908 [2024-07-27 02:32:05.827419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90 00:32:37.908 [2024-07-27 02:32:05.827806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.908 
[2024-07-27 02:32:05.827854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:37.908 [2024-07-27 02:32:05.845867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90
00:32:37.908 [2024-07-27 02:32:05.846278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:37.908 [2024-07-27 02:32:05.846321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:37.908 [2024-07-27 02:32:05.867170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24495c0) with pdu=0x2000190fef90
00:32:37.908 [2024-07-27 02:32:05.867674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:37.908 [2024-07-27 02:32:05.867701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:37.908
00:32:37.908 Latency(us)
00:32:37.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:37.908 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:37.908 nvme0n1 : 2.01 1649.44 206.18 0.00 0.00 9673.24 6602.15 22233.69
00:32:37.908 ===================================================================================================================
00:32:37.908 Total : 1649.44 206.18 0.00 0.00 9673.24 6602.15 22233.69
00:32:37.908 0
00:32:37.908 02:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:37.908 02:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:37.908 02:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:37.908 02:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:37.908 | .driver_specific
00:32:37.908 | .nvme_error
00:32:37.908 | .status_code
00:32:37.908 | .command_transient_transport_error'
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 106 > 0 ))
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1181446
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1181446 ']'
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1181446
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1181446
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1
= sudo ']' 00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1181446' 00:32:38.167 killing process with pid 1181446 00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1181446 00:32:38.167 Received shutdown signal, test time was about 2.000000 seconds 00:32:38.167 00:32:38.167 Latency(us) 00:32:38.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.167 =================================================================================================================== 00:32:38.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:38.167 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1181446 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1180076 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1180076 ']' 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1180076 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1180076 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1180076' 00:32:38.426 killing process with pid 1180076 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1180076 00:32:38.426 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1180076 00:32:38.684 00:32:38.684 real 0m15.241s 00:32:38.684 user 0m30.800s 00:32:38.684 sys 0m3.853s 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:38.684 ************************************ 00:32:38.684 END TEST nvmf_digest_error 00:32:38.684 ************************************ 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:38.684 rmmod 
nvme_tcp 00:32:38.684 rmmod nvme_fabrics 00:32:38.684 rmmod nvme_keyring 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1180076 ']' 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1180076 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1180076 ']' 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1180076 00:32:38.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1180076) - No such process 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1180076 is not found' 00:32:38.684 Process with pid 1180076 is not found 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:38.684 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:38.685 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.685 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.685 02:32:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.601 02:32:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:40.601 00:32:40.601 real 0m34.911s 00:32:40.601 user 1m1.528s 00:32:40.601 sys 0m9.560s 00:32:40.601 02:32:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:40.601 02:32:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:40.601 ************************************ 00:32:40.601 END TEST nvmf_digest 00:32:40.601 ************************************ 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.923 ************************************ 00:32:40.923 START TEST nvmf_bdevperf 00:32:40.923 ************************************ 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 
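The nvmf_bdevperf test starting here reuses the same physical-NIC loopback fixture as the digest tests above: one port of the dual-port Intel E810 (cvl_0_0) is moved into a network namespace and given the target address, while its peer port (cvl_0_1) stays in the root namespace as the initiator side, so target and host traffic actually crosses the NIC. Condensed from the nvmf_tcp_init trace further down (every command appears verbatim in this log):

    ip netns add cvl_0_0_ns_spdk                                   # namespace that will hold the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
    ping -c 1 10.0.0.2                                             # sanity checks in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1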
00:32:40.923 * Looking for test storage... 00:32:40.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:40.923 02:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:42.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:42.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:42.836 02:32:10 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:42.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:42.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1
00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:42.836 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:42.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:42.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms
00:32:42.836
00:32:42.836 --- 10.0.0.2 ping statistics ---
00:32:42.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:42.836 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:42.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:42.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:32:42.837
00:32:42.837 --- 10.0.0.1 ping statistics ---
00:32:42.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:42.837 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1183795
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1183795
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1183795 ']'
00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:42.837 02:32:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:42.837 [2024-07-27 02:32:10.933491] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:42.837 [2024-07-27 02:32:10.933563] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.837 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.837 [2024-07-27 02:32:10.970386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:43.095 [2024-07-27 02:32:11.000733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:43.095 [2024-07-27 02:32:11.090992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.095 [2024-07-27 02:32:11.091056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.095 [2024-07-27 02:32:11.091080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.095 [2024-07-27 02:32:11.091094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.095 [2024-07-27 02:32:11.091105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
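Once nvmf_tgt is up inside the namespace and waitforlisten returns, tgt_init configures the target over its default /var/tmp/spdk.sock RPC socket. The rpc_cmd calls traced below collapse to five plain rpc.py invocations; flags are exactly as in the trace, and the 64/512 sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set in bdevperf.sh:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                                      # bring up the TCP transport
    $rpc bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host NQN
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # Malloc0 becomes nsid 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420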
00:32:43.095 [2024-07-27 02:32:11.091212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:43.095 [2024-07-27 02:32:11.091550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:43.095 [2024-07-27 02:32:11.091555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.095 [2024-07-27 02:32:11.232435] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.095 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.353 Malloc0 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.353 [2024-07-27 02:32:11.293429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:43.353 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:43.353 { 00:32:43.353 "params": { 00:32:43.353 "name": "Nvme$subsystem", 00:32:43.353 "trtype": "$TEST_TRANSPORT", 00:32:43.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.354 "adrfam": "ipv4", 00:32:43.354 "trsvcid": "$NVMF_PORT", 00:32:43.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.354 "hdgst": ${hdgst:-false}, 00:32:43.354 "ddgst": ${ddgst:-false} 00:32:43.354 }, 00:32:43.354 "method": "bdev_nvme_attach_controller" 00:32:43.354 } 00:32:43.354 EOF 00:32:43.354 )") 00:32:43.354 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:43.354 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:43.354 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:43.354 02:32:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:43.354 "params": { 00:32:43.354 "name": "Nvme1", 00:32:43.354 "trtype": "tcp", 00:32:43.354 "traddr": "10.0.0.2", 00:32:43.354 "adrfam": "ipv4", 00:32:43.354 "trsvcid": "4420", 00:32:43.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.354 "hdgst": false, 00:32:43.354 "ddgst": false 00:32:43.354 }, 00:32:43.354 "method": "bdev_nvme_attach_controller" 00:32:43.354 }' 00:32:43.354 [2024-07-27 02:32:11.342012] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:32:43.354 [2024-07-27 02:32:11.342106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183934 ] 00:32:43.354 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.354 [2024-07-27 02:32:11.373715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:43.354 [2024-07-27 02:32:11.402099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.354 [2024-07-27 02:32:11.492127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.613 Running I/O for 1 seconds... 
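gen_nvmf_target_json hands bdevperf its one bdev over /dev/fd/62: the bdev_nvme_attach_controller fragment printf'd in the trace above, wrapped in an SPDK subsystem-config envelope. A sketch of the equivalent standalone invocation, with the params block verbatim from the trace and the outer envelope reconstructed (it is assembled by gen_nvmf_target_json and never printed whole in the log); the process substitution reproduces the /dev/fd handoff:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )

With hdgst and ddgst both false, no header or data digests are negotiated on this connection, so this run exercises plain verify I/O rather than the digest paths tested earlier.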
00:32:44.986
00:32:44.986 Latency(us)
00:32:44.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:44.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:44.986 Verification LBA range: start 0x0 length 0x4000
00:32:44.986 Nvme1n1 : 1.01 8230.20 32.15 0.00 0.00 15458.85 2803.48 16602.45
00:32:44.986 ===================================================================================================================
00:32:44.986 Total : 8230.20 32.15 0.00 0.00 15458.85 2803.48 16602.45
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1184081
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:44.986 {
00:32:44.986 "params": {
00:32:44.986 "name": "Nvme$subsystem",
00:32:44.986 "trtype": "$TEST_TRANSPORT",
00:32:44.986 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:44.986 "adrfam": "ipv4",
00:32:44.986 "trsvcid": "$NVMF_PORT",
00:32:44.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:44.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:44.986 "hdgst": ${hdgst:-false},
00:32:44.986 "ddgst": ${ddgst:-false}
00:32:44.986 },
00:32:44.986 "method": "bdev_nvme_attach_controller"
00:32:44.986 }
00:32:44.986 EOF
00:32:44.986 )")
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:44.986 02:32:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:44.986 "params": {
00:32:44.986 "name": "Nvme1",
00:32:44.986 "trtype": "tcp",
00:32:44.986 "traddr": "10.0.0.2",
00:32:44.986 "adrfam": "ipv4",
00:32:44.986 "trsvcid": "4420",
00:32:44.986 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:44.986 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:44.986 "hdgst": false,
00:32:44.986 "ddgst": false
00:32:44.986 },
00:32:44.986 "method": "bdev_nvme_attach_controller"
00:32:44.986 }'
00:32:44.986 [2024-07-27 02:32:12.968874] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
[2024-07-27 02:32:12.968948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184081 ]
00:32:44.986 EAL: No free 2048 kB hugepages reported on node 1
00:32:44.986 [2024-07-27 02:32:13.001889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
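The 15-second run that follows is the crash half of the test: roughly three seconds in (the sleep 3 above), host/bdevperf.sh@33 in the trace below kills the target outright while the 128-deep verify queue is still in flight, so every command outstanding on qid 1 completes with ABORTED - SQ DELETION (00/08), which is the flood that fills the rest of this section. Stripped to its two lines (pid from the nvmfpid= line earlier):

    kill -9 1183795   # SIGKILL the nvmf_tgt recorded at nvmf/common.sh@481: no graceful shutdown
    sleep 3           # leave bdevperf running against the dead target while its queued I/O aborts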
00:32:44.986 [2024-07-27 02:32:13.030456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.986 [2024-07-27 02:32:13.115485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.254 Running I/O for 15 seconds... 00:32:47.799 02:32:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1183795 00:32:47.799 02:32:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:47.799 [2024-07-27 02:32:15.942529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.942979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.942998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:47.799 [2024-07-27 02:32:15.943292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.799 [2024-07-27 02:32:15.943429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943653] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.799 [2024-07-27 02:32:15.943787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.799 [2024-07-27 02:32:15.943804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.943822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.943839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.943872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.943890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.943906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.943924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.943940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.943958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.943974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.943991] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39496 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:47.800 [2024-07-27 02:32:15.944715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.800 [2024-07-27 02:32:15.944850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.944973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.944988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.945006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.945022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.945039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.945055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.800 [2024-07-27 02:32:15.945082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.800 [2024-07-27 02:32:15.945121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.945973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.945990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.801 [2024-07-27 02:32:15.946368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.801 [2024-07-27 02:32:15.946382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:47.802 [2024-07-27 02:32:15.946515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946851] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.946967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.946984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.947000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.947033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.802 [2024-07-27 02:32:15.947075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195be60 is same with the state(5) to be set 00:32:47.802 [2024-07-27 02:32:15.947133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.802 [2024-07-27 02:32:15.947145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.802 [2024-07-27 02:32:15.947157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39280 len:8 PRP1 0x0 PRP2 0x0 00:32:47.802 [2024-07-27 02:32:15.947171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947240] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x195be60 was disconnected and freed. reset controller. 
00:32:47.802 [2024-07-27 02:32:15.947316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.802 [2024-07-27 02:32:15.947339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.802 [2024-07-27 02:32:15.947387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.802 [2024-07-27 02:32:15.947418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.802 [2024-07-27 02:32:15.947448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.802 [2024-07-27 02:32:15.947463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:47.802 [2024-07-27 02:32:15.951314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.802 [2024-07-27 02:32:15.951371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:47.802 [2024-07-27 02:32:15.952172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.802 [2024-07-27 02:32:15.952203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:47.802 [2024-07-27 02:32:15.952220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:47.802 [2024-07-27 02:32:15.952462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:47.802 [2024-07-27 02:32:15.952707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.802 [2024-07-27 02:32:15.952733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.802 [2024-07-27 02:32:15.952751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.802 [2024-07-27 02:32:15.956343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.064 [2024-07-27 02:32:15.965599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:15.966095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:15.966128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:15.966147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.064 [2024-07-27 02:32:15.966387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.064 [2024-07-27 02:32:15.966631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.064 [2024-07-27 02:32:15.966656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.064 [2024-07-27 02:32:15.966672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.064 [2024-07-27 02:32:15.970247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.064 [2024-07-27 02:32:15.979510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:15.979990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:15.980021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:15.980039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.064 [2024-07-27 02:32:15.980294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.064 [2024-07-27 02:32:15.980540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.064 [2024-07-27 02:32:15.980565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.064 [2024-07-27 02:32:15.980581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.064 [2024-07-27 02:32:15.984157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.064 [2024-07-27 02:32:15.993417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:15.993900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:15.993931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:15.993949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.064 [2024-07-27 02:32:15.994198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.064 [2024-07-27 02:32:15.994443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.064 [2024-07-27 02:32:15.994468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.064 [2024-07-27 02:32:15.994484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.064 [2024-07-27 02:32:15.998049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.064 [2024-07-27 02:32:16.007321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:16.007768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:16.007799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:16.007818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.064 [2024-07-27 02:32:16.008056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.064 [2024-07-27 02:32:16.008311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.064 [2024-07-27 02:32:16.008335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.064 [2024-07-27 02:32:16.008351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.064 [2024-07-27 02:32:16.011914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.064 [2024-07-27 02:32:16.021176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:16.021650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:16.021681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:16.021705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.064 [2024-07-27 02:32:16.021944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.064 [2024-07-27 02:32:16.022200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.064 [2024-07-27 02:32:16.022225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.064 [2024-07-27 02:32:16.022241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.064 [2024-07-27 02:32:16.025803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.064 [2024-07-27 02:32:16.035079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:16.035552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:16.035584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:16.035602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.064 [2024-07-27 02:32:16.035841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.064 [2024-07-27 02:32:16.036096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.064 [2024-07-27 02:32:16.036122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.064 [2024-07-27 02:32:16.036138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.064 [2024-07-27 02:32:16.039701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.064 [2024-07-27 02:32:16.048953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:16.049426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:16.049457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:16.049475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.064 [2024-07-27 02:32:16.049714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.064 [2024-07-27 02:32:16.049957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.064 [2024-07-27 02:32:16.049982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.064 [2024-07-27 02:32:16.049998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.064 [2024-07-27 02:32:16.053570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.064 [2024-07-27 02:32:16.062822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.064 [2024-07-27 02:32:16.063283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-07-27 02:32:16.063324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.064 [2024-07-27 02:32:16.063341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.063576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.063835] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.063865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.063882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.067458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.065 [2024-07-27 02:32:16.076718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.077178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.077210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.077228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.077467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.077710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.077735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.077750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.081329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.065 [2024-07-27 02:32:16.090585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.091025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.091056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.091086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.091326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.091570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.091595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.091610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.095189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.065 [2024-07-27 02:32:16.104453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.104915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.104945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.104963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.105212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.105456] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.105481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.105497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.109065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.065 [2024-07-27 02:32:16.118329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.118806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.118837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.118855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.119104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.119348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.119373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.119390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.122953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.065 [2024-07-27 02:32:16.132225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.132688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.132719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.132737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.132976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.133231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.133256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.133272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.136845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.065 [2024-07-27 02:32:16.146163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.146638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.146669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.146687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.146926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.147185] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.147210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.147226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.150792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.065 [2024-07-27 02:32:16.160089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.160538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.160569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.160587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.160831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.161090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.161115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.161131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.164693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.065 [2024-07-27 02:32:16.173963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.174433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.174460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.174492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.174745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.174989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.175014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.065 [2024-07-27 02:32:16.175030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.065 [2024-07-27 02:32:16.178599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.065 [2024-07-27 02:32:16.187855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.065 [2024-07-27 02:32:16.188322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-07-27 02:32:16.188354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.065 [2024-07-27 02:32:16.188372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.065 [2024-07-27 02:32:16.188611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.065 [2024-07-27 02:32:16.188855] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.065 [2024-07-27 02:32:16.188879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.066 [2024-07-27 02:32:16.188895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.066 [2024-07-27 02:32:16.192463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.066 [2024-07-27 02:32:16.201708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.066 [2024-07-27 02:32:16.202140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-07-27 02:32:16.202173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.066 [2024-07-27 02:32:16.202191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.066 [2024-07-27 02:32:16.202430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.066 [2024-07-27 02:32:16.202674] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.066 [2024-07-27 02:32:16.202698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.066 [2024-07-27 02:32:16.202719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.066 [2024-07-27 02:32:16.206293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.066 [2024-07-27 02:32:16.215543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.066 [2024-07-27 02:32:16.215957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-07-27 02:32:16.215988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.066 [2024-07-27 02:32:16.216006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.066 [2024-07-27 02:32:16.216264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.066 [2024-07-27 02:32:16.216509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.066 [2024-07-27 02:32:16.216533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.066 [2024-07-27 02:32:16.216549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.066 [2024-07-27 02:32:16.220120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.327 [2024-07-27 02:32:16.229392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.327 [2024-07-27 02:32:16.229859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.327 [2024-07-27 02:32:16.229890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.327 [2024-07-27 02:32:16.229908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.327 [2024-07-27 02:32:16.230158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.327 [2024-07-27 02:32:16.230402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.327 [2024-07-27 02:32:16.230427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.327 [2024-07-27 02:32:16.230443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.234005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.328 [2024-07-27 02:32:16.243291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.243764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.243805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.243821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.244078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.244323] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.244347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.244364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.247934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.328 [2024-07-27 02:32:16.257227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.257713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.257760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.257777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.258036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.258290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.258315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.258331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.261897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.328 [2024-07-27 02:32:16.271273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.271744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.271776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.271795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.272034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.272287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.272312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.272327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.275900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.328 [2024-07-27 02:32:16.285187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.285671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.285698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.285729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.285988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.286246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.286271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.286287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.289853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.328 [2024-07-27 02:32:16.299119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.299560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.299591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.299609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.299848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.300110] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.300136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.300152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.303714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.328 [2024-07-27 02:32:16.312972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.313414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.313440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.313456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.313718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.313962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.313986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.314002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.317574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.328 [2024-07-27 02:32:16.326823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.327290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.327330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.327347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.327601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.327846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.327870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.327886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.331459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.328 [2024-07-27 02:32:16.340725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.341200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.341241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.341258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.341518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.341763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.341787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.341803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.345382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.328 [2024-07-27 02:32:16.354634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.355074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.328 [2024-07-27 02:32:16.355106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.328 [2024-07-27 02:32:16.355124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.328 [2024-07-27 02:32:16.355362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.328 [2024-07-27 02:32:16.355605] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.328 [2024-07-27 02:32:16.355629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.328 [2024-07-27 02:32:16.355646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.328 [2024-07-27 02:32:16.359216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.328 [2024-07-27 02:32:16.368467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.328 [2024-07-27 02:32:16.368928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.368959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.368978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.369227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.369471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.369496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.369511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.373078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.329 [2024-07-27 02:32:16.382333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.382788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.382819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.382837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.383086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.383330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.383355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.383370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.386928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.329 [2024-07-27 02:32:16.396183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.396700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.396741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.396763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.397028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.397282] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.397307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.397323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.400884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.329 [2024-07-27 02:32:16.410165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.410599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.410631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.410648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.410887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.411142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.411167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.411183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.414729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.329 [2024-07-27 02:32:16.424193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.424720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.424770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.424788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.425026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.425270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.425293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.425307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.428876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.329 [2024-07-27 02:32:16.438143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.438656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.438684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.438700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.438945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.439203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.439230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.439245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.442823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.329 [2024-07-27 02:32:16.452118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.452604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.452654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.452672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.452910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.453182] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.453204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.453218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.456793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.329 [2024-07-27 02:32:16.466107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.466574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.466602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.466618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.466871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.467129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.467154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.467170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.470655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.329 [2024-07-27 02:32:16.479529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.329 [2024-07-27 02:32:16.479948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.329 [2024-07-27 02:32:16.479975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.329 [2024-07-27 02:32:16.480006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.329 [2024-07-27 02:32:16.480250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.329 [2024-07-27 02:32:16.480463] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.329 [2024-07-27 02:32:16.480485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.329 [2024-07-27 02:32:16.480498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.329 [2024-07-27 02:32:16.483575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.591 [2024-07-27 02:32:16.492827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.591 [2024-07-27 02:32:16.493329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.591 [2024-07-27 02:32:16.493357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.591 [2024-07-27 02:32:16.493373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.591 [2024-07-27 02:32:16.493614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.591 [2024-07-27 02:32:16.493829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.591 [2024-07-27 02:32:16.493849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.591 [2024-07-27 02:32:16.493862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.591 [2024-07-27 02:32:16.496903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.591 [2024-07-27 02:32:16.506002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.591 [2024-07-27 02:32:16.506444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.591 [2024-07-27 02:32:16.506472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.591 [2024-07-27 02:32:16.506488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.591 [2024-07-27 02:32:16.506744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.591 [2024-07-27 02:32:16.506944] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.591 [2024-07-27 02:32:16.506964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.591 [2024-07-27 02:32:16.506977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.591 [2024-07-27 02:32:16.509974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.591 [2024-07-27 02:32:16.519252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.591 [2024-07-27 02:32:16.519673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.591 [2024-07-27 02:32:16.519700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.591 [2024-07-27 02:32:16.519715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.591 [2024-07-27 02:32:16.519954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.591 [2024-07-27 02:32:16.520217] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.591 [2024-07-27 02:32:16.520239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.591 [2024-07-27 02:32:16.520253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.591 [2024-07-27 02:32:16.523259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.591 [2024-07-27 02:32:16.532552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.591 [2024-07-27 02:32:16.532959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.591 [2024-07-27 02:32:16.532987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.591 [2024-07-27 02:32:16.533008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.591 [2024-07-27 02:32:16.533235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.591 [2024-07-27 02:32:16.533482] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.591 [2024-07-27 02:32:16.533503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.591 [2024-07-27 02:32:16.533516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.591 [2024-07-27 02:32:16.536501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.591 [2024-07-27 02:32:16.545786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.591 [2024-07-27 02:32:16.546242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.591 [2024-07-27 02:32:16.546270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.591 [2024-07-27 02:32:16.546286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.591 [2024-07-27 02:32:16.546527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.591 [2024-07-27 02:32:16.546726] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.591 [2024-07-27 02:32:16.546746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.546760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.549744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.592 [2024-07-27 02:32:16.559017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.559502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.559529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.559560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.559797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.559997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.560017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.560030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.563036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.592 [2024-07-27 02:32:16.572295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.572814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.572842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.572858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.573125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.573339] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.573364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.573395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.576383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.592 [2024-07-27 02:32:16.585601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.586055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.586103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.586120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.586363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.586579] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.586599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.586612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.589648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.592 [2024-07-27 02:32:16.598906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.599324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.599352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.599367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.599601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.599801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.599820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.599834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.602816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.592 [2024-07-27 02:32:16.612042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.612460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.612488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.612505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.612745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.612945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.612965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.612977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.615991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.592 [2024-07-27 02:32:16.625210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.625676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.625704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.625720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.625974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.626221] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.626243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.626257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.629246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.592 [2024-07-27 02:32:16.638542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.638986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.639014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.639030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.639255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.639494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.639515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.639527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.642497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.592 [2024-07-27 02:32:16.651763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.652353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.652405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.652423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.652644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.652845] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.652866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.652879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.655981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.592 [2024-07-27 02:32:16.664955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.592 [2024-07-27 02:32:16.665459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.592 [2024-07-27 02:32:16.665487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.592 [2024-07-27 02:32:16.665520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.592 [2024-07-27 02:32:16.665782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.592 [2024-07-27 02:32:16.665983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.592 [2024-07-27 02:32:16.666003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.592 [2024-07-27 02:32:16.666016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.592 [2024-07-27 02:32:16.669018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.593 [2024-07-27 02:32:16.678156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.593 [2024-07-27 02:32:16.678592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.593 [2024-07-27 02:32:16.678621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.593 [2024-07-27 02:32:16.678637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.593 [2024-07-27 02:32:16.678871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.593 [2024-07-27 02:32:16.679096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.593 [2024-07-27 02:32:16.679118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.593 [2024-07-27 02:32:16.679131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.593 [2024-07-27 02:32:16.682162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.593 [2024-07-27 02:32:16.691494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.593 [2024-07-27 02:32:16.692014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.593 [2024-07-27 02:32:16.692043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.593 [2024-07-27 02:32:16.692068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.593 [2024-07-27 02:32:16.692302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.593 [2024-07-27 02:32:16.692538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.593 [2024-07-27 02:32:16.692558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.593 [2024-07-27 02:32:16.692571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.593 [2024-07-27 02:32:16.695551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.593 [2024-07-27 02:32:16.704806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.593 [2024-07-27 02:32:16.705243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.593 [2024-07-27 02:32:16.705272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.593 [2024-07-27 02:32:16.705288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.593 [2024-07-27 02:32:16.705533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.593 [2024-07-27 02:32:16.705750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.593 [2024-07-27 02:32:16.705770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.593 [2024-07-27 02:32:16.705789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.593 [2024-07-27 02:32:16.708882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.593 [2024-07-27 02:32:16.718216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.593 [2024-07-27 02:32:16.718701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.593 [2024-07-27 02:32:16.718729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.593 [2024-07-27 02:32:16.718745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.593 [2024-07-27 02:32:16.719005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.593 [2024-07-27 02:32:16.719227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.593 [2024-07-27 02:32:16.719249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.593 [2024-07-27 02:32:16.719263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.593 [2024-07-27 02:32:16.722293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.593 [2024-07-27 02:32:16.731406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.593 [2024-07-27 02:32:16.731861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.593 [2024-07-27 02:32:16.731904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.593 [2024-07-27 02:32:16.731921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.593 [2024-07-27 02:32:16.732160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.593 [2024-07-27 02:32:16.732387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.593 [2024-07-27 02:32:16.732408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.593 [2024-07-27 02:32:16.732421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.593 [2024-07-27 02:32:16.735400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.593 [2024-07-27 02:32:16.744693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.593 [2024-07-27 02:32:16.745138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.593 [2024-07-27 02:32:16.745167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:48.593 [2024-07-27 02:32:16.745183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:48.593 [2024-07-27 02:32:16.745413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:48.593 [2024-07-27 02:32:16.745647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.593 [2024-07-27 02:32:16.745668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.593 [2024-07-27 02:32:16.745681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.593 [2024-07-27 02:32:16.748772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.853 [2024-07-27 02:32:16.758182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.853 [2024-07-27 02:32:16.758625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.853 [2024-07-27 02:32:16.758658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.853 [2024-07-27 02:32:16.758690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.853 [2024-07-27 02:32:16.758945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.853 [2024-07-27 02:32:16.759193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.853 [2024-07-27 02:32:16.759216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.853 [2024-07-27 02:32:16.759230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.853 [2024-07-27 02:32:16.762202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.853 [2024-07-27 02:32:16.771474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.853 [2024-07-27 02:32:16.771914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.853 [2024-07-27 02:32:16.771942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.771958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.772227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.772453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.772474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.772487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.775470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.784813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.785263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.785291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.785307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.785562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.785763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.785783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.785796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.788872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.798162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.798625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.798653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.798670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.798923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.799174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.799197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.799211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.802212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.811429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.811944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.811986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.812003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.812252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.812472] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.812492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.812505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.815506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.824634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.825084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.825127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.825144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.825386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.825602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.825622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.825635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.828658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.837980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.838404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.838446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.838462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.838731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.838931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.838951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.838964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.841969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.851264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.851671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.851711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.851727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.851963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.852226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.852248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.852262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.855299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.864590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.865093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.865122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.865138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.865379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.865595] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.865615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.865628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.868647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.877953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.878393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.878422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.878438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.878672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.878872] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.854 [2024-07-27 02:32:16.878893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.854 [2024-07-27 02:32:16.878906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.854 [2024-07-27 02:32:16.881918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.854 [2024-07-27 02:32:16.891337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.854 [2024-07-27 02:32:16.891794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.854 [2024-07-27 02:32:16.891821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.854 [2024-07-27 02:32:16.891859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.854 [2024-07-27 02:32:16.892108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.854 [2024-07-27 02:32:16.892315] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.892336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.892350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.895339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.904732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.905307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.905345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.905361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.905611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.905810] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.905830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.905843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.908894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.918445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.918927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.918955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.918972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.919195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.919453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.919473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.919486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.922465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.931634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.932129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.932158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.932175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.932417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.932632] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.932656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.932669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.935649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.944970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.945417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.945446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.945462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.945719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.945919] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.945939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.945952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.949027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.958387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.958824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.958852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.958868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.959133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.959361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.959382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.959396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.962611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.971876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.972305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.972333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.972364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.972603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.972804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.972824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.972837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.975823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.985225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.985748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.985777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.985793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.986048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.986282] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.986304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.986318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:16.989348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:16.998532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:16.998995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:16.999037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:16.999054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:16.999291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:16.999527] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:16.999548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:16.999561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.855 [2024-07-27 02:32:17.002616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.855 [2024-07-27 02:32:17.011914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.855 [2024-07-27 02:32:17.012369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.855 [2024-07-27 02:32:17.012398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:48.855 [2024-07-27 02:32:17.012415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:48.855 [2024-07-27 02:32:17.012655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:48.855 [2024-07-27 02:32:17.012861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.855 [2024-07-27 02:32:17.012882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.855 [2024-07-27 02:32:17.012895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.015962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.025267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.025789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.025817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.025833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.026108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.026321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.026343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.026372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.029406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.038594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.038986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.039013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.039029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.039275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.039493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.039513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.039527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.042553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.051881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.052290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.052319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.052335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.052568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.052768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.052789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.052801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.055779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.065108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.065755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.065791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.065832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.066079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.066308] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.066330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.066350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.069364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.078326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.078795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.078824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.078857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.079126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.079340] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.079361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.079376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.082370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.091653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.092157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.092186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.092203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.092458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.092658] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.092677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.092690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.095696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.104947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.105417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.105446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.105463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.105716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.105916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.105936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.105949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.108976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.118208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.118649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.118676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.118706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.118942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.119189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.119212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.119226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.122232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.131570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.116 [2024-07-27 02:32:17.132074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.116 [2024-07-27 02:32:17.132102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.116 [2024-07-27 02:32:17.132119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.116 [2024-07-27 02:32:17.132362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.116 [2024-07-27 02:32:17.132562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.116 [2024-07-27 02:32:17.132581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.116 [2024-07-27 02:32:17.132594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.116 [2024-07-27 02:32:17.135636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.116 [2024-07-27 02:32:17.144762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.145219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.145248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.145264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.145505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.145721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.145741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.145754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.148742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.158012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.158433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.158462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.158493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.158766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.158966] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.158987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.159000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.162000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.171316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.171720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.171747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.171762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.171982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.172232] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.172254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.172268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.175259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.184557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.185012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.185054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.185079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.185309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.185526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.185546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.185559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.188535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.197802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.198233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.198261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.198277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.198518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.198718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.198738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.198755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.201699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.211066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.211514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.211542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.211558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.211813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.212013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.212033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.212068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.215217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.224472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.224915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.224942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.224958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.225211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.225432] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.225452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.225465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.228609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.237698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.238148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.238190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.238207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.238442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.238642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.238662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.238675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.241659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.250894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.251394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.251427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.251460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.251696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.117 [2024-07-27 02:32:17.251895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.117 [2024-07-27 02:32:17.251915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.117 [2024-07-27 02:32:17.251929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.117 [2024-07-27 02:32:17.254930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.117 [2024-07-27 02:32:17.264284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.117 [2024-07-27 02:32:17.264713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.117 [2024-07-27 02:32:17.264740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.117 [2024-07-27 02:32:17.264756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.117 [2024-07-27 02:32:17.264992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.118 [2024-07-27 02:32:17.265221] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.118 [2024-07-27 02:32:17.265242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.118 [2024-07-27 02:32:17.265255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.118 [2024-07-27 02:32:17.268258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.379 [2024-07-27 02:32:17.277749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.379 [2024-07-27 02:32:17.278190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.379 [2024-07-27 02:32:17.278219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.379 [2024-07-27 02:32:17.278235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.379 [2024-07-27 02:32:17.278475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.379 [2024-07-27 02:32:17.278676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.379 [2024-07-27 02:32:17.278696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.379 [2024-07-27 02:32:17.278709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.379 [2024-07-27 02:32:17.281830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.379 [2024-07-27 02:32:17.291030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.379 [2024-07-27 02:32:17.291474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.379 [2024-07-27 02:32:17.291503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.379 [2024-07-27 02:32:17.291519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.379 [2024-07-27 02:32:17.291757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.379 [2024-07-27 02:32:17.291964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.379 [2024-07-27 02:32:17.291984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.379 [2024-07-27 02:32:17.291997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.379 [2024-07-27 02:32:17.294997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.379 [2024-07-27 02:32:17.304311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.379 [2024-07-27 02:32:17.304763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.379 [2024-07-27 02:32:17.304805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.379 [2024-07-27 02:32:17.304822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.379 [2024-07-27 02:32:17.305086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.379 [2024-07-27 02:32:17.305309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.379 [2024-07-27 02:32:17.305329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.379 [2024-07-27 02:32:17.305343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.379 [2024-07-27 02:32:17.308387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.379 [2024-07-27 02:32:17.317538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.379 [2024-07-27 02:32:17.318038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.379 [2024-07-27 02:32:17.318072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.379 [2024-07-27 02:32:17.318090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.379 [2024-07-27 02:32:17.318319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.379 [2024-07-27 02:32:17.318538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.379 [2024-07-27 02:32:17.318559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.379 [2024-07-27 02:32:17.318571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.379 [2024-07-27 02:32:17.321592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.379 [2024-07-27 02:32:17.330780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.379 [2024-07-27 02:32:17.331266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:49.379 [2024-07-27 02:32:17.331294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:49.379 [2024-07-27 02:32:17.331310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:49.379 [2024-07-27 02:32:17.331548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:49.379 [2024-07-27 02:32:17.331748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:49.379 [2024-07-27 02:32:17.331767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:49.379 [2024-07-27 02:32:17.331780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.379 [2024-07-27 02:32:17.334840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:49.379 [2024-07-27 02:32:17.344074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.379 [2024-07-27 02:32:17.344514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.379 [2024-07-27 02:32:17.344542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.379 [2024-07-27 02:32:17.344559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.379 [2024-07-27 02:32:17.344799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.379 [2024-07-27 02:32:17.345000] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.379 [2024-07-27 02:32:17.345020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.379 [2024-07-27 02:32:17.345033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.379 [2024-07-27 02:32:17.348069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.379 [2024-07-27 02:32:17.357396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.379 [2024-07-27 02:32:17.357839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.379 [2024-07-27 02:32:17.357867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.379 [2024-07-27 02:32:17.357883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.379 [2024-07-27 02:32:17.358138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.379 [2024-07-27 02:32:17.358373] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.379 [2024-07-27 02:32:17.358410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.379 [2024-07-27 02:32:17.358423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.379 [2024-07-27 02:32:17.361438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.379 [2024-07-27 02:32:17.370717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.379 [2024-07-27 02:32:17.371165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.371193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.371210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.371452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.371667] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.371688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.371701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.374643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.380 [2024-07-27 02:32:17.383916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.384370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.384397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.384433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.384690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.384890] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.384910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.384923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.387931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.380 [2024-07-27 02:32:17.397276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.397716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.397742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.397774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.398011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.398262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.398284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.398298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.401301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.380 [2024-07-27 02:32:17.410636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.411074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.411122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.411139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.411369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.411586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.411606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.411619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.414657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.380 [2024-07-27 02:32:17.424019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.424492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.424519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.424534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.424783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.424983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.425008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.425022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.428106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.380 [2024-07-27 02:32:17.437419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.437870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.437911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.437927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.438179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.438413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.438434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.438447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.441450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.380 [2024-07-27 02:32:17.450740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.451147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.451176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.451192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.451440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.451640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.451660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.451673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.454661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.380 [2024-07-27 02:32:17.463910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.464442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.464469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.464501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.464740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.464941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.464960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.464973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.468211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.380 [2024-07-27 02:32:17.477947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.478426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.478457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.478475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.478713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.380 [2024-07-27 02:32:17.478957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.380 [2024-07-27 02:32:17.478982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.380 [2024-07-27 02:32:17.478997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.380 [2024-07-27 02:32:17.482556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.380 [2024-07-27 02:32:17.491775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.380 [2024-07-27 02:32:17.492225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.380 [2024-07-27 02:32:17.492253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.380 [2024-07-27 02:32:17.492269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.380 [2024-07-27 02:32:17.492524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.381 [2024-07-27 02:32:17.492768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.381 [2024-07-27 02:32:17.492792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.381 [2024-07-27 02:32:17.492808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.381 [2024-07-27 02:32:17.496380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.381 [2024-07-27 02:32:17.505629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.381 [2024-07-27 02:32:17.506107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.381 [2024-07-27 02:32:17.506138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.381 [2024-07-27 02:32:17.506156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.381 [2024-07-27 02:32:17.506395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.381 [2024-07-27 02:32:17.506639] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.381 [2024-07-27 02:32:17.506663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.381 [2024-07-27 02:32:17.506679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.381 [2024-07-27 02:32:17.510250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.381 [2024-07-27 02:32:17.519513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.381 [2024-07-27 02:32:17.519986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.381 [2024-07-27 02:32:17.520018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.381 [2024-07-27 02:32:17.520036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.381 [2024-07-27 02:32:17.520291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.381 [2024-07-27 02:32:17.520535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.381 [2024-07-27 02:32:17.520560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.381 [2024-07-27 02:32:17.520575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.381 [2024-07-27 02:32:17.524147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.381 [2024-07-27 02:32:17.533401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.381 [2024-07-27 02:32:17.533838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.381 [2024-07-27 02:32:17.533870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.381 [2024-07-27 02:32:17.533887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.381 [2024-07-27 02:32:17.534140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.381 [2024-07-27 02:32:17.534384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.381 [2024-07-27 02:32:17.534409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.381 [2024-07-27 02:32:17.534425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.381 [2024-07-27 02:32:17.537991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.643 [2024-07-27 02:32:17.547282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.643 [2024-07-27 02:32:17.547752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.643 [2024-07-27 02:32:17.547793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.643 [2024-07-27 02:32:17.547810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.643 [2024-07-27 02:32:17.548080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.643 [2024-07-27 02:32:17.548313] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.643 [2024-07-27 02:32:17.548350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.643 [2024-07-27 02:32:17.548367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.643 [2024-07-27 02:32:17.551931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.643 [2024-07-27 02:32:17.561192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.643 [2024-07-27 02:32:17.561630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.643 [2024-07-27 02:32:17.561657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.643 [2024-07-27 02:32:17.561672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.643 [2024-07-27 02:32:17.561901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.643 [2024-07-27 02:32:17.562158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.643 [2024-07-27 02:32:17.562184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.643 [2024-07-27 02:32:17.562205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.643 [2024-07-27 02:32:17.565765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.643 [2024-07-27 02:32:17.575021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.643 [2024-07-27 02:32:17.575474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.643 [2024-07-27 02:32:17.575505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.643 [2024-07-27 02:32:17.575523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.643 [2024-07-27 02:32:17.575762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.643 [2024-07-27 02:32:17.576005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.643 [2024-07-27 02:32:17.576029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.643 [2024-07-27 02:32:17.576045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.643 [2024-07-27 02:32:17.579619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.643 [2024-07-27 02:32:17.588879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.643 [2024-07-27 02:32:17.589322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.643 [2024-07-27 02:32:17.589364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.643 [2024-07-27 02:32:17.589379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.643 [2024-07-27 02:32:17.589608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.643 [2024-07-27 02:32:17.589857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.643 [2024-07-27 02:32:17.589882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.643 [2024-07-27 02:32:17.589898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.643 [2024-07-27 02:32:17.593466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.643 [2024-07-27 02:32:17.602711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.643 [2024-07-27 02:32:17.603175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.643 [2024-07-27 02:32:17.603205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.643 [2024-07-27 02:32:17.603223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.643 [2024-07-27 02:32:17.603462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.643 [2024-07-27 02:32:17.603705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.643 [2024-07-27 02:32:17.603729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.643 [2024-07-27 02:32:17.603746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.643 [2024-07-27 02:32:17.607316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.643 [2024-07-27 02:32:17.616565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.643 [2024-07-27 02:32:17.617038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.643 [2024-07-27 02:32:17.617077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.643 [2024-07-27 02:32:17.617097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.643 [2024-07-27 02:32:17.617335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.643 [2024-07-27 02:32:17.617579] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.643 [2024-07-27 02:32:17.617603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.643 [2024-07-27 02:32:17.617619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.643 [2024-07-27 02:32:17.621186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.643 [2024-07-27 02:32:17.630439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.643 [2024-07-27 02:32:17.630897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.643 [2024-07-27 02:32:17.630928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.630946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.631196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.631440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.631464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.631480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.635040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.644 [2024-07-27 02:32:17.644320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.644759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.644791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.644810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.645048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.645309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.645333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.645349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.648908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.644 [2024-07-27 02:32:17.658163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.658644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.658671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.658702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.658960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.659223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.659249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.659265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.662822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.644 [2024-07-27 02:32:17.672078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.672558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.672599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.672616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.672873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.673130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.673155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.673171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.676733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.644 [2024-07-27 02:32:17.685990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.686434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.686465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.686483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.686722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.686966] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.686990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.687006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.690582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.644 [2024-07-27 02:32:17.699834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.700321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.700349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.700379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.700635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.700880] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.700904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.700920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.704506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.644 [2024-07-27 02:32:17.713758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.714197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.714225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.714240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.714481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.714725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.714750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.714765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.718346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.644 [2024-07-27 02:32:17.727609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.728212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.728239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.728254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.728512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.728756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.728781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.728797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.732363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.644 [2024-07-27 02:32:17.741618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.742055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.742092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.742111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.742349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.742592] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.644 [2024-07-27 02:32:17.742616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.644 [2024-07-27 02:32:17.742632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.644 [2024-07-27 02:32:17.746200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.644 [2024-07-27 02:32:17.755453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.644 [2024-07-27 02:32:17.755911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.644 [2024-07-27 02:32:17.755947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.644 [2024-07-27 02:32:17.755966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.644 [2024-07-27 02:32:17.756217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.644 [2024-07-27 02:32:17.756461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.645 [2024-07-27 02:32:17.756485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.645 [2024-07-27 02:32:17.756501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.645 [2024-07-27 02:32:17.760070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.645 [2024-07-27 02:32:17.769322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.645 [2024-07-27 02:32:17.769780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.645 [2024-07-27 02:32:17.769810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.645 [2024-07-27 02:32:17.769828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.645 [2024-07-27 02:32:17.770078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.645 [2024-07-27 02:32:17.770323] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.645 [2024-07-27 02:32:17.770347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.645 [2024-07-27 02:32:17.770362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.645 [2024-07-27 02:32:17.773923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.645 [2024-07-27 02:32:17.783190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.645 [2024-07-27 02:32:17.783651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.645 [2024-07-27 02:32:17.783681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.645 [2024-07-27 02:32:17.783699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.645 [2024-07-27 02:32:17.783938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.645 [2024-07-27 02:32:17.784193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.645 [2024-07-27 02:32:17.784219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.645 [2024-07-27 02:32:17.784235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.645 [2024-07-27 02:32:17.787794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.645 [2024-07-27 02:32:17.797046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.645 [2024-07-27 02:32:17.797488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.645 [2024-07-27 02:32:17.797518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.645 [2024-07-27 02:32:17.797535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.645 [2024-07-27 02:32:17.797774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.645 [2024-07-27 02:32:17.798024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.645 [2024-07-27 02:32:17.798048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.645 [2024-07-27 02:32:17.798075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.645 [2024-07-27 02:32:17.801642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.904 [2024-07-27 02:32:17.810909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.904 [2024-07-27 02:32:17.811408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.904 [2024-07-27 02:32:17.811450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.904 [2024-07-27 02:32:17.811467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.904 [2024-07-27 02:32:17.811727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.811972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.811997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.812012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.815590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.905 [2024-07-27 02:32:17.824878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.825360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.825391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.825409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.825648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.825892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.825916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.825932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.829505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.905 [2024-07-27 02:32:17.838764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.839197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.839228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.839246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.839485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.839739] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.839764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.839780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.843369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.905 [2024-07-27 02:32:17.852642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.853102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.853134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.853152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.853391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.853634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.853659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.853675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.857251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.905 [2024-07-27 02:32:17.866510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.866988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.867019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.867037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.867286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.867530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.867554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.867570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.871139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.905 [2024-07-27 02:32:17.880399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.880866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.880907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.880923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.881181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.881438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.881463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.881478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.885040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.905 [2024-07-27 02:32:17.894301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.894822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.894849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.894870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.895141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.895386] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.895410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.895426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.898986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.905 [2024-07-27 02:32:17.908246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.908734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.908775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.908791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.909046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.909302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.909327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.909342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.912901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.905 [2024-07-27 02:32:17.922167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.922610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.922636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.922652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.922899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.923154] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.923179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.923195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.905 [2024-07-27 02:32:17.926755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.905 [2024-07-27 02:32:17.936007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.905 [2024-07-27 02:32:17.936468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.905 [2024-07-27 02:32:17.936496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.905 [2024-07-27 02:32:17.936527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.905 [2024-07-27 02:32:17.936775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.905 [2024-07-27 02:32:17.937019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.905 [2024-07-27 02:32:17.937049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.905 [2024-07-27 02:32:17.937076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:17.940656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.906 [2024-07-27 02:32:17.949919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:17.950375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:17.950404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:17.950420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:17.950670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:17.950915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:17.950939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:17.950955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:17.954528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.906 [2024-07-27 02:32:17.963778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:17.964221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:17.964253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:17.964271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:17.964510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:17.964754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:17.964778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:17.964793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:17.968372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.906 [2024-07-27 02:32:17.977645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:17.978080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:17.978111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:17.978129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:17.978367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:17.978611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:17.978635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:17.978651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:17.982229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.906 [2024-07-27 02:32:17.991709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:17.992189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:17.992221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:17.992239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:17.992477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:17.992721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:17.992745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:17.992762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:17.996338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.906 [2024-07-27 02:32:18.005594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:18.006033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:18.006082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:18.006101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:18.006360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:18.006604] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:18.006629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:18.006644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:18.010214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.906 [2024-07-27 02:32:18.019467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:18.019936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:18.019967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:18.019985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:18.020234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:18.020477] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:18.020502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:18.020518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:18.024083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.906 [2024-07-27 02:32:18.033329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:18.033804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:18.033835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:18.033853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:18.034111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:18.034355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:18.034380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:18.034396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:18.037963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.906 [2024-07-27 02:32:18.047251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:18.047731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:18.047758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:18.047789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:18.048048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:18.048303] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:18.048328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:18.048343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.906 [2024-07-27 02:32:18.051910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.906 [2024-07-27 02:32:18.061195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.906 [2024-07-27 02:32:18.061630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.906 [2024-07-27 02:32:18.061661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:49.906 [2024-07-27 02:32:18.061679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:49.906 [2024-07-27 02:32:18.061918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:49.906 [2024-07-27 02:32:18.062175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.906 [2024-07-27 02:32:18.062200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.906 [2024-07-27 02:32:18.062216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.065801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.168 [2024-07-27 02:32:18.075097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.075539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.075570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.075588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.168 [2024-07-27 02:32:18.075826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.168 [2024-07-27 02:32:18.076082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.168 [2024-07-27 02:32:18.076107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.168 [2024-07-27 02:32:18.076136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.079704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.168 [2024-07-27 02:32:18.088984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.089475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.089502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.089517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.168 [2024-07-27 02:32:18.089762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.168 [2024-07-27 02:32:18.090006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.168 [2024-07-27 02:32:18.090031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.168 [2024-07-27 02:32:18.090047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.093626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.168 [2024-07-27 02:32:18.102900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.103385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.103416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.103433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.168 [2024-07-27 02:32:18.103671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.168 [2024-07-27 02:32:18.103924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.168 [2024-07-27 02:32:18.103948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.168 [2024-07-27 02:32:18.103964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.107541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.168 [2024-07-27 02:32:18.116818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.117286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.117316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.117334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.168 [2024-07-27 02:32:18.117572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.168 [2024-07-27 02:32:18.117816] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.168 [2024-07-27 02:32:18.117840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.168 [2024-07-27 02:32:18.117856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.121436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.168 [2024-07-27 02:32:18.130711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.131177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.131208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.131226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.168 [2024-07-27 02:32:18.131465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.168 [2024-07-27 02:32:18.131709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.168 [2024-07-27 02:32:18.131733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.168 [2024-07-27 02:32:18.131749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.135331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.168 [2024-07-27 02:32:18.144611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.145075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.145106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.145124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.168 [2024-07-27 02:32:18.145363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.168 [2024-07-27 02:32:18.145607] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.168 [2024-07-27 02:32:18.145632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.168 [2024-07-27 02:32:18.145647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.149231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.168 [2024-07-27 02:32:18.158507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.158951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.158982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.159001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.168 [2024-07-27 02:32:18.159249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.168 [2024-07-27 02:32:18.159493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.168 [2024-07-27 02:32:18.159519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.168 [2024-07-27 02:32:18.159534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.168 [2024-07-27 02:32:18.163114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.168 [2024-07-27 02:32:18.172388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.168 [2024-07-27 02:32:18.172800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.168 [2024-07-27 02:32:18.172831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.168 [2024-07-27 02:32:18.172849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.173101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.173351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.173376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.173391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.176959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.169 [2024-07-27 02:32:18.186253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.186858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.186927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.186945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.187194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.187438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.187462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.187478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.191046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.169 [2024-07-27 02:32:18.200117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.200581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.200608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.200623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.200871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.201127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.201152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.201168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.204731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.169 [2024-07-27 02:32:18.214004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.214449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.214480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.214498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.214736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.214980] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.215004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.215020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.218608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.169 [2024-07-27 02:32:18.227891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.228335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.228365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.228383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.228621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.228865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.228889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.228905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.232480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.169 [2024-07-27 02:32:18.241756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.242211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.242242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.242260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.242499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.242743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.242767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.242782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.246353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.169 [2024-07-27 02:32:18.255694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.256177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.256204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.256234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.256489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.256733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.256757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.256773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.260346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.169 [2024-07-27 02:32:18.269600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.270065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.270096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.270120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.270360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.270603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.270628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.270643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.274217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.169 [2024-07-27 02:32:18.283474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.283942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.283972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.283990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.284240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.284484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.169 [2024-07-27 02:32:18.284509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.169 [2024-07-27 02:32:18.284525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.169 [2024-07-27 02:32:18.288094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.169 [2024-07-27 02:32:18.297351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.169 [2024-07-27 02:32:18.297824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.169 [2024-07-27 02:32:18.297850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.169 [2024-07-27 02:32:18.297880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.169 [2024-07-27 02:32:18.298145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.169 [2024-07-27 02:32:18.298390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.170 [2024-07-27 02:32:18.298414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.170 [2024-07-27 02:32:18.298430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.170 [2024-07-27 02:32:18.301990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.170 [2024-07-27 02:32:18.311250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.170 [2024-07-27 02:32:18.311713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.170 [2024-07-27 02:32:18.311755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.170 [2024-07-27 02:32:18.311772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.170 [2024-07-27 02:32:18.312032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.170 [2024-07-27 02:32:18.312291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.170 [2024-07-27 02:32:18.312317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.170 [2024-07-27 02:32:18.312333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.170 [2024-07-27 02:32:18.315901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.170 [2024-07-27 02:32:18.325175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.170 [2024-07-27 02:32:18.325639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.170 [2024-07-27 02:32:18.325667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.170 [2024-07-27 02:32:18.325684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.170 [2024-07-27 02:32:18.325937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.170 [2024-07-27 02:32:18.326197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.170 [2024-07-27 02:32:18.326222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.170 [2024-07-27 02:32:18.326238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.430 [2024-07-27 02:32:18.329803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.430 [2024-07-27 02:32:18.339072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.430 [2024-07-27 02:32:18.339553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.430 [2024-07-27 02:32:18.339580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.430 [2024-07-27 02:32:18.339611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.430 [2024-07-27 02:32:18.339869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.430 [2024-07-27 02:32:18.340126] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.430 [2024-07-27 02:32:18.340151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.430 [2024-07-27 02:32:18.340168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.430 [2024-07-27 02:32:18.343742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.430 [2024-07-27 02:32:18.352998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.430 [2024-07-27 02:32:18.353477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.430 [2024-07-27 02:32:18.353509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.430 [2024-07-27 02:32:18.353528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.430 [2024-07-27 02:32:18.353767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.430 [2024-07-27 02:32:18.354010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.430 [2024-07-27 02:32:18.354034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.430 [2024-07-27 02:32:18.354050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.430 [2024-07-27 02:32:18.357621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.430 [2024-07-27 02:32:18.366874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.430 [2024-07-27 02:32:18.367341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.430 [2024-07-27 02:32:18.367373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.430 [2024-07-27 02:32:18.367391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.430 [2024-07-27 02:32:18.367630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.430 [2024-07-27 02:32:18.367873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.430 [2024-07-27 02:32:18.367897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.430 [2024-07-27 02:32:18.367913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.430 [2024-07-27 02:32:18.371485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.430 [2024-07-27 02:32:18.380741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.430 [2024-07-27 02:32:18.381186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.430 [2024-07-27 02:32:18.381217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.430 [2024-07-27 02:32:18.381235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.430 [2024-07-27 02:32:18.381474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.430 [2024-07-27 02:32:18.381718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.430 [2024-07-27 02:32:18.381743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.430 [2024-07-27 02:32:18.381758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.430 [2024-07-27 02:32:18.385333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.430 [2024-07-27 02:32:18.394579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.430 [2024-07-27 02:32:18.395047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.430 [2024-07-27 02:32:18.395086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.430 [2024-07-27 02:32:18.395105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.430 [2024-07-27 02:32:18.395343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.430 [2024-07-27 02:32:18.395587] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.430 [2024-07-27 02:32:18.395612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.430 [2024-07-27 02:32:18.395627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.430 [2024-07-27 02:32:18.399198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.430 [2024-07-27 02:32:18.408446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.430 [2024-07-27 02:32:18.408906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.430 [2024-07-27 02:32:18.408936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.430 [2024-07-27 02:32:18.408963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.430 [2024-07-27 02:32:18.409214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.430 [2024-07-27 02:32:18.409458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.409483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.409500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.413068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.431 [2024-07-27 02:32:18.422331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.422800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.422830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.422848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.423098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.423342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.423367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.423382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.426946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.431 [2024-07-27 02:32:18.436219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.436671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.436702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.436720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.436959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.437213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.437238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.437255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.440816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.431 [2024-07-27 02:32:18.450149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.450630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.450657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.450688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.450946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.451203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.451234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.451251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.454814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.431 [2024-07-27 02:32:18.464088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.464557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.464587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.464605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.464844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.465100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.465125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.465142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.468714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.431 [2024-07-27 02:32:18.477798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.478243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.478272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.478289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.478534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.478749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.478769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.478782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.481911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.431 [2024-07-27 02:32:18.491222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.491756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.491788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.491806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.492045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.492312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.492335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.492365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.495927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.431 [2024-07-27 02:32:18.505175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.505651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.505712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.505730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.505969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.506223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.506246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.506260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.509826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.431 [2024-07-27 02:32:18.518982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.519469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.519512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.519529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.519778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.520022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.520047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.520073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.523635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.431 [2024-07-27 02:32:18.532883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.431 [2024-07-27 02:32:18.533316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.431 [2024-07-27 02:32:18.533347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.431 [2024-07-27 02:32:18.533365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.431 [2024-07-27 02:32:18.533603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.431 [2024-07-27 02:32:18.533846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.431 [2024-07-27 02:32:18.533871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.431 [2024-07-27 02:32:18.533887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.431 [2024-07-27 02:32:18.537456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:50.431 [2024-07-27 02:32:18.546728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.432 [2024-07-27 02:32:18.547185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.432 [2024-07-27 02:32:18.547216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:50.432 [2024-07-27 02:32:18.547234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:50.432 [2024-07-27 02:32:18.547478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:50.432 [2024-07-27 02:32:18.547722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.432 [2024-07-27 02:32:18.547747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.432 [2024-07-27 02:32:18.547762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.432 [2024-07-27 02:32:18.551334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:50.432 [2024-07-27 02:32:18.560603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.432 [2024-07-27 02:32:18.561068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.432 [2024-07-27 02:32:18.561097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.432 [2024-07-27 02:32:18.561113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.432 [2024-07-27 02:32:18.561341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.432 [2024-07-27 02:32:18.561601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.432 [2024-07-27 02:32:18.561625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.432 [2024-07-27 02:32:18.561641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.432 [2024-07-27 02:32:18.565206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.432 [2024-07-27 02:32:18.574460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.432 [2024-07-27 02:32:18.574919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.432 [2024-07-27 02:32:18.574946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.432 [2024-07-27 02:32:18.574962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.432 [2024-07-27 02:32:18.575231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.432 [2024-07-27 02:32:18.575475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.432 [2024-07-27 02:32:18.575500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.432 [2024-07-27 02:32:18.575516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.432 [2024-07-27 02:32:18.579083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.432 [2024-07-27 02:32:18.588375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.432 [2024-07-27 02:32:18.588820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.432 [2024-07-27 02:32:18.588851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.432 [2024-07-27 02:32:18.588869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.432 [2024-07-27 02:32:18.589119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.432 [2024-07-27 02:32:18.589363] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.432 [2024-07-27 02:32:18.589387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.432 [2024-07-27 02:32:18.589409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.691 [2024-07-27 02:32:18.592973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.691 [2024-07-27 02:32:18.602230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.691 [2024-07-27 02:32:18.602667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.691 [2024-07-27 02:32:18.602698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.691 [2024-07-27 02:32:18.602717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.691 [2024-07-27 02:32:18.602955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.691 [2024-07-27 02:32:18.603209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.691 [2024-07-27 02:32:18.603234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.691 [2024-07-27 02:32:18.603249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.691 [2024-07-27 02:32:18.606809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.691 [2024-07-27 02:32:18.616051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.691 [2024-07-27 02:32:18.616510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.691 [2024-07-27 02:32:18.616541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.691 [2024-07-27 02:32:18.616559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.691 [2024-07-27 02:32:18.616797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.691 [2024-07-27 02:32:18.617041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.691 [2024-07-27 02:32:18.617075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.691 [2024-07-27 02:32:18.617092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.691 [2024-07-27 02:32:18.620653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.691 [2024-07-27 02:32:18.629899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.691 [2024-07-27 02:32:18.630338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.691 [2024-07-27 02:32:18.630368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.691 [2024-07-27 02:32:18.630387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.691 [2024-07-27 02:32:18.630625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.691 [2024-07-27 02:32:18.630869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.691 [2024-07-27 02:32:18.630894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.691 [2024-07-27 02:32:18.630909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.691 [2024-07-27 02:32:18.634480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.691 [2024-07-27 02:32:18.643750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.691 [2024-07-27 02:32:18.644209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.691 [2024-07-27 02:32:18.644246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.691 [2024-07-27 02:32:18.644265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.691 [2024-07-27 02:32:18.644504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.691 [2024-07-27 02:32:18.644747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.691 [2024-07-27 02:32:18.644771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.691 [2024-07-27 02:32:18.644788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.648359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.657577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.658063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.658090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.658121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.658379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.658622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.658647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.658662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.662232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.671472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.671924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.671966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.671983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.672252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.672497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.672522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.672538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.676105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.685355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.685786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.685817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.685835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.686085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.686336] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.686361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.686376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.689934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.699191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.699655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.699686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.699704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.699942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.700198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.700224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.700240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.703799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.713020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.713465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.713496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.713513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.713752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.713995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.714020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.714036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.717605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.726859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.727301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.727332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.727350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.727589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.727832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.727857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.727872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.731444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.740689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.741149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.741181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.741199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.741437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.741681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.741705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.741721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.745304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.754552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.755028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.755055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.755096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.755354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.755598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.755622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.755638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.759201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.692 [2024-07-27 02:32:18.768443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.692 [2024-07-27 02:32:18.768885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.692 [2024-07-27 02:32:18.768916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.692 [2024-07-27 02:32:18.768934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.692 [2024-07-27 02:32:18.769184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.692 [2024-07-27 02:32:18.769430] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.692 [2024-07-27 02:32:18.769453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.692 [2024-07-27 02:32:18.769469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.692 [2024-07-27 02:32:18.773028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.693 [2024-07-27 02:32:18.782486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.693 [2024-07-27 02:32:18.782916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.693 [2024-07-27 02:32:18.782946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.693 [2024-07-27 02:32:18.782970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.693 [2024-07-27 02:32:18.783221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.693 [2024-07-27 02:32:18.783466] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.693 [2024-07-27 02:32:18.783490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.693 [2024-07-27 02:32:18.783505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.693 [2024-07-27 02:32:18.787066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.693 [2024-07-27 02:32:18.796309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.693 [2024-07-27 02:32:18.796757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.693 [2024-07-27 02:32:18.796798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.693 [2024-07-27 02:32:18.796813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.693 [2024-07-27 02:32:18.797086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.693 [2024-07-27 02:32:18.797330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.693 [2024-07-27 02:32:18.797354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.693 [2024-07-27 02:32:18.797370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.693 [2024-07-27 02:32:18.800931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.693 [2024-07-27 02:32:18.810184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.693 [2024-07-27 02:32:18.810658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.693 [2024-07-27 02:32:18.810684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.693 [2024-07-27 02:32:18.810699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.693 [2024-07-27 02:32:18.810967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.693 [2024-07-27 02:32:18.811223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.693 [2024-07-27 02:32:18.811247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.693 [2024-07-27 02:32:18.811263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.693 [2024-07-27 02:32:18.814821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.693 [2024-07-27 02:32:18.824094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.693 [2024-07-27 02:32:18.824813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.693 [2024-07-27 02:32:18.824846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.693 [2024-07-27 02:32:18.824864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.693 [2024-07-27 02:32:18.825132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.693 [2024-07-27 02:32:18.825339] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.693 [2024-07-27 02:32:18.825394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.693 [2024-07-27 02:32:18.825412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.693 [2024-07-27 02:32:18.828879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.693 [2024-07-27 02:32:18.837942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.693 [2024-07-27 02:32:18.838392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.693 [2024-07-27 02:32:18.838436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.693 [2024-07-27 02:32:18.838455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.693 [2024-07-27 02:32:18.838694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.693 [2024-07-27 02:32:18.838937] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.693 [2024-07-27 02:32:18.838962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.693 [2024-07-27 02:32:18.838978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.693 [2024-07-27 02:32:18.842552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.953 [2024-07-27 02:32:18.851889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.953 [2024-07-27 02:32:18.852384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.953 [2024-07-27 02:32:18.852416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.953 [2024-07-27 02:32:18.852434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.953 [2024-07-27 02:32:18.852673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.953 [2024-07-27 02:32:18.852917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.953 [2024-07-27 02:32:18.852942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.953 [2024-07-27 02:32:18.852957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.953 [2024-07-27 02:32:18.856524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.953 [2024-07-27 02:32:18.865664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.953 [2024-07-27 02:32:18.866161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.953 [2024-07-27 02:32:18.866190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.953 [2024-07-27 02:32:18.866206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.953 [2024-07-27 02:32:18.866458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.953 [2024-07-27 02:32:18.866703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.953 [2024-07-27 02:32:18.866727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.953 [2024-07-27 02:32:18.866743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.953 [2024-07-27 02:32:18.870322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.953 [2024-07-27 02:32:18.879572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.953 [2024-07-27 02:32:18.880030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.953 [2024-07-27 02:32:18.880074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.953 [2024-07-27 02:32:18.880094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.953 [2024-07-27 02:32:18.880333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.953 [2024-07-27 02:32:18.880577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.953 [2024-07-27 02:32:18.880601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.953 [2024-07-27 02:32:18.880618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.953 [2024-07-27 02:32:18.884183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.953 [2024-07-27 02:32:18.893427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.953 [2024-07-27 02:32:18.893995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.953 [2024-07-27 02:32:18.894048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.953 [2024-07-27 02:32:18.894077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.953 [2024-07-27 02:32:18.894317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.953 [2024-07-27 02:32:18.894561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.953 [2024-07-27 02:32:18.894586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.953 [2024-07-27 02:32:18.894602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.953 [2024-07-27 02:32:18.898167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.953 [2024-07-27 02:32:18.907407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.953 [2024-07-27 02:32:18.907840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.953 [2024-07-27 02:32:18.907871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.953 [2024-07-27 02:32:18.907889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.953 [2024-07-27 02:32:18.908138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.953 [2024-07-27 02:32:18.908383] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.953 [2024-07-27 02:32:18.908407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.953 [2024-07-27 02:32:18.908423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.953 [2024-07-27 02:32:18.911981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.953 [2024-07-27 02:32:18.921231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.953 [2024-07-27 02:32:18.921656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.953 [2024-07-27 02:32:18.921687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.953 [2024-07-27 02:32:18.921705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.953 [2024-07-27 02:32:18.921949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.953 [2024-07-27 02:32:18.922205] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.953 [2024-07-27 02:32:18.922230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.953 [2024-07-27 02:32:18.922246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.953 [2024-07-27 02:32:18.925803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1183795 Killed "${NVMF_APP[@]}" "$@"
00:32:50.953 [2024-07-27 02:32:18.935052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.953 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:32:50.953 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:32:50.953 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:50.953 [2024-07-27 02:32:18.935522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.953 [2024-07-27 02:32:18.935553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.953 [2024-07-27 02:32:18.935571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:50.954 [2024-07-27 02:32:18.935809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:18.936052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:18.936086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:18.936102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1184747
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1184747
00:32:50.954 [2024-07-27 02:32:18.939661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1184747 ']'
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:50.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:50.954 02:32:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:50.954 [2024-07-27 02:32:18.948921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.954 [2024-07-27 02:32:18.949346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.954 [2024-07-27 02:32:18.949377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.954 [2024-07-27 02:32:18.949407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 [2024-07-27 02:32:18.949647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:18.949891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:18.949915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:18.949931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 [2024-07-27 02:32:18.953500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 [2024-07-27 02:32:18.962487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.954 [2024-07-27 02:32:18.962914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.954 [2024-07-27 02:32:18.962942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.954 [2024-07-27 02:32:18.962959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 [2024-07-27 02:32:18.963214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:18.963448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:18.963470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:18.963484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 [2024-07-27 02:32:18.966586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 [2024-07-27 02:32:18.975739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.954 [2024-07-27 02:32:18.976182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.954 [2024-07-27 02:32:18.976210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.954 [2024-07-27 02:32:18.976226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 [2024-07-27 02:32:18.976460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:18.976660] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:18.976680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:18.976693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 [2024-07-27 02:32:18.979845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 [2024-07-27 02:32:18.984744] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:32:50.954 [2024-07-27 02:32:18.984815] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:50.954 [2024-07-27 02:32:18.989241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.954 [2024-07-27 02:32:18.989706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.954 [2024-07-27 02:32:18.989735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.954 [2024-07-27 02:32:18.989751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 [2024-07-27 02:32:18.990011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:18.990251] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:18.990275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:18.990289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 [2024-07-27 02:32:18.993483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 [2024-07-27 02:32:19.002675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.954 [2024-07-27 02:32:19.003087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.954 [2024-07-27 02:32:19.003115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.954 [2024-07-27 02:32:19.003131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 [2024-07-27 02:32:19.003346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:19.003579] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:19.003599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:19.003613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 [2024-07-27 02:32:19.006696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 [2024-07-27 02:32:19.016018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.954 [2024-07-27 02:32:19.016461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.954 [2024-07-27 02:32:19.016489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.954 [2024-07-27 02:32:19.016505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 [2024-07-27 02:32:19.016761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:19.016962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:19.016982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:19.016996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 [2024-07-27 02:32:19.019972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 EAL: No free 2048 kB hugepages reported on node 1
00:32:50.954 [2024-07-27 02:32:19.025193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:50.954 [2024-07-27 02:32:19.029675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.954 [2024-07-27 02:32:19.030152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.954 [2024-07-27 02:32:19.030181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.954 [2024-07-27 02:32:19.030198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.954 [2024-07-27 02:32:19.030425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.954 [2024-07-27 02:32:19.030678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.954 [2024-07-27 02:32:19.030703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.954 [2024-07-27 02:32:19.030719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.954 [2024-07-27 02:32:19.034262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.954 [2024-07-27 02:32:19.043660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.955 [2024-07-27 02:32:19.044119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.955 [2024-07-27 02:32:19.044149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.955 [2024-07-27 02:32:19.044165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.955 [2024-07-27 02:32:19.044408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.955 [2024-07-27 02:32:19.044661] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.955 [2024-07-27 02:32:19.044686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.955 [2024-07-27 02:32:19.044702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.955 [2024-07-27 02:32:19.048231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.955 [2024-07-27 02:32:19.054992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:50.955 [2024-07-27 02:32:19.057528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.955 [2024-07-27 02:32:19.057986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.955 [2024-07-27 02:32:19.058029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.955 [2024-07-27 02:32:19.058047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.955 [2024-07-27 02:32:19.058297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.955 [2024-07-27 02:32:19.058551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.955 [2024-07-27 02:32:19.058576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.955 [2024-07-27 02:32:19.058593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.955 [2024-07-27 02:32:19.062137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.955 [2024-07-27 02:32:19.071461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.955 [2024-07-27 02:32:19.072103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.955 [2024-07-27 02:32:19.072144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.955 [2024-07-27 02:32:19.072164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.955 [2024-07-27 02:32:19.072415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.955 [2024-07-27 02:32:19.072678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.955 [2024-07-27 02:32:19.072702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.955 [2024-07-27 02:32:19.072734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.955 [2024-07-27 02:32:19.076265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.955 [2024-07-27 02:32:19.085320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.955 [2024-07-27 02:32:19.085823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.955 [2024-07-27 02:32:19.085855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.955 [2024-07-27 02:32:19.085873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.955 [2024-07-27 02:32:19.086139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.955 [2024-07-27 02:32:19.086379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.955 [2024-07-27 02:32:19.086404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.955 [2024-07-27 02:32:19.086420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.955 [2024-07-27 02:32:19.089973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:50.955 [2024-07-27 02:32:19.099146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:50.955 [2024-07-27 02:32:19.099675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.955 [2024-07-27 02:32:19.099704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:50.955 [2024-07-27 02:32:19.099721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:50.955 [2024-07-27 02:32:19.099972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:50.955 [2024-07-27 02:32:19.100198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:50.955 [2024-07-27 02:32:19.100256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:50.955 [2024-07-27 02:32:19.100270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:50.955 [2024-07-27 02:32:19.103787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:51.215 [2024-07-27 02:32:19.113132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:51.215 [2024-07-27 02:32:19.113737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:51.215 [2024-07-27 02:32:19.113776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420
00:32:51.215 [2024-07-27 02:32:19.113795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set
00:32:51.215 [2024-07-27 02:32:19.114064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor
00:32:51.215 [2024-07-27 02:32:19.114295] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:51.215 [2024-07-27 02:32:19.114318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:51.215 [2024-07-27 02:32:19.114334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:51.215 [2024-07-27 02:32:19.117863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:51.215 [2024-07-27 02:32:19.126966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.215 [2024-07-27 02:32:19.127602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.215 [2024-07-27 02:32:19.127647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.215 [2024-07-27 02:32:19.127666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.215 [2024-07-27 02:32:19.127906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.215 [2024-07-27 02:32:19.128181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.215 [2024-07-27 02:32:19.128204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.215 [2024-07-27 02:32:19.128219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.215 [2024-07-27 02:32:19.131709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.215 [2024-07-27 02:32:19.140802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.215 [2024-07-27 02:32:19.141318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.215 [2024-07-27 02:32:19.141347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.215 [2024-07-27 02:32:19.141364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.215 [2024-07-27 02:32:19.141617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.215 [2024-07-27 02:32:19.141862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.215 [2024-07-27 02:32:19.141887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.215 [2024-07-27 02:32:19.141903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.215 [2024-07-27 02:32:19.145445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.215 [2024-07-27 02:32:19.147361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.215 [2024-07-27 02:32:19.147397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.215 [2024-07-27 02:32:19.147420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.215 [2024-07-27 02:32:19.147433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.215 [2024-07-27 02:32:19.147445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
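The app_setup_trace notices just above come from a freshly started target application and explain how to capture its tracepoints: the app was launched with tracepoint group mask 0xFFFF, and the events accumulate in the shared-memory file /dev/shm/nvmf_trace.0. Both inspection paths are taken straight from the notices:

    # snapshot events while the app is still running (shm id 0)
    spdk_trace -s nvmf -i 0
    # or keep the shm file for offline analysis after the app exits
    cp /dev/shm/nvmf_trace.0 /tmp/    # destination is an arbitrary choice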
00:32:51.215 [2024-07-27 02:32:19.147508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.215 [2024-07-27 02:32:19.147728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.215 [2024-07-27 02:32:19.147733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.215 [2024-07-27 02:32:19.154425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.215 [2024-07-27 02:32:19.154953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.215 [2024-07-27 02:32:19.154988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.215 [2024-07-27 02:32:19.155007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.215 [2024-07-27 02:32:19.155238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.215 [2024-07-27 02:32:19.155475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.215 [2024-07-27 02:32:19.155498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.215 [2024-07-27 02:32:19.155522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.215 [2024-07-27 02:32:19.158738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.215 [2024-07-27 02:32:19.167941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.215 [2024-07-27 02:32:19.168545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.215 [2024-07-27 02:32:19.168585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.215 [2024-07-27 02:32:19.168605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.215 [2024-07-27 02:32:19.168843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.215 [2024-07-27 02:32:19.169069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.169091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.169108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.172297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
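The reactor_run notices at the top of this block show one SPDK reactor (a busy-polling event loop pinned to a core) starting on each of cores 1, 2 and 3; placement is dictated entirely by the application's core mask. Assuming the target was launched with a mask covering exactly cores 1-3 (the -m argument itself is not visible in this excerpt), the mask arithmetic works out as:

    # cores 1,2,3 -> binary 1110 -> -m 0xe
    printf '0x%x\n' "$(( (1 << 1) | (1 << 2) | (1 << 3) ))"    # prints 0xe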
00:32:51.216 [2024-07-27 02:32:19.181660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.182255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.182298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.182318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.182560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.182776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.182797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.182814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.186065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.216 [2024-07-27 02:32:19.195315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.195939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.195979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.195998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.196231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.196467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.196489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.196506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.199744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:51.216 [2024-07-27 02:32:19.208965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.209572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.209618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.209639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.209875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.210120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.210144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.210160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.213337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.216 [2024-07-27 02:32:19.222663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.223296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.223336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.223366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.223603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.223827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.223849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.223865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.227032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:51.216 [2024-07-27 02:32:19.236255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.236691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.236719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.236736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.236952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.237210] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.237233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.237248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.240461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.216 [2024-07-27 02:32:19.249874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.250324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.250363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.250379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.250594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.250820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.250843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.250858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.254109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
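Fourteen reset attempts have now failed in well under a second: the first disconnect fired at 02:32:19.071 and this one at 02:32:19.249, i.e. about 178 ms across 13 intervals, roughly 13.7 ms per retry. Nothing throttles the loop because each connect() is refused immediately instead of timing out, so the pace is set by the reconnect poller alone.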
00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.216 [2024-07-27 02:32:19.263495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.263955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.263984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.264000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.264225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.264458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.264480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.264494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.267742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
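The shell trace opening this block — (( i == 0 )) evaluating false, return 0, then timing_exit start_nvmf_tgt — is the tail of the harness's readiness wait: the retry counter never reached zero, so the target came up within budget and the timed setup phase closes. A hedged sketch of that waitforlisten-style helper body, assuming it polls the app's RPC socket (the real implementation in test/common/autotest_common.sh also checks that the pid stays alive):

    # Sketch under assumptions: poll the RPC socket until it answers.
    i=40
    while (( i != 0 )); do
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
        (( i-- ))
    done
    (( i == 0 )) && return 1    # timed out
    return 0                    # matches the 'return 0' traced above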
00:32:51.216 [2024-07-27 02:32:19.277247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.216 [2024-07-27 02:32:19.277677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.216 [2024-07-27 02:32:19.277706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.216 [2024-07-27 02:32:19.277723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.216 [2024-07-27 02:32:19.277953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.216 [2024-07-27 02:32:19.278198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.216 [2024-07-27 02:32:19.278221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.216 [2024-07-27 02:32:19.278236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.216 [2024-07-27 02:32:19.281522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.216 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.216 [2024-07-27 02:32:19.281749] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.216 [2024-07-27 02:32:19.290805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.217 [2024-07-27 02:32:19.291243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.217 [2024-07-27 02:32:19.291271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.217 [2024-07-27 02:32:19.291287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.217 [2024-07-27 02:32:19.291529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.217 [2024-07-27 02:32:19.291735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.217 [2024-07-27 02:32:19.291766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.217 [2024-07-27 02:32:19.291779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.217 [2024-07-27 02:32:19.294945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
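With the target answering RPCs, configuration begins amid the reconnect noise: rpc_cmd (the harness wrapper around scripts/rpc.py) issues nvmf_create_transport -t tcp -o -u 8192, and the target acknowledges with '*** TCP Transport Init ***'. Here -t selects the transport type, -u sets the I/O unit size to 8192 bytes, and -o is the extra TCP option these scripts carry in NVMF_TRANSPORT_OPTS (its exact meaning — commonly described as the C2H-success toggle — should be checked against rpc.py nvmf_create_transport --help for the SPDK revision in use). Outside the harness the same call is:

    # direct equivalent of the rpc_cmd line above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192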
00:32:51.217 [2024-07-27 02:32:19.304362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.217 [2024-07-27 02:32:19.304770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.217 [2024-07-27 02:32:19.304797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.217 [2024-07-27 02:32:19.304813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.217 [2024-07-27 02:32:19.305042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.217 [2024-07-27 02:32:19.305263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.217 [2024-07-27 02:32:19.305284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.217 [2024-07-27 02:32:19.305298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.217 [2024-07-27 02:32:19.308545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.217 [2024-07-27 02:32:19.317969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.217 [2024-07-27 02:32:19.318538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.217 [2024-07-27 02:32:19.318573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.217 [2024-07-27 02:32:19.318592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.217 [2024-07-27 02:32:19.318823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.217 [2024-07-27 02:32:19.319053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.217 [2024-07-27 02:32:19.319085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.217 [2024-07-27 02:32:19.319102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.217 [2024-07-27 02:32:19.322379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
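Next the script creates the RAM-backed block device that will become the subsystem's namespace: bdev_malloc_create 64 512 -b Malloc0 allocates a 64 MiB malloc bdev with a 512-byte block size — 64 * 1024 * 1024 / 512 = 131072 logical blocks — named Malloc0, which is why the bare string 'Malloc0' is echoed at the start of the next block. Direct equivalent:

    # 64 MiB malloc bdev, 512 B blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0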
00:32:51.217 Malloc0 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.217 [2024-07-27 02:32:19.331595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.217 [2024-07-27 02:32:19.332139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.217 [2024-07-27 02:32:19.332173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.217 [2024-07-27 02:32:19.332191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.217 [2024-07-27 02:32:19.332411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.217 [2024-07-27 02:32:19.332633] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.217 [2024-07-27 02:32:19.332656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.217 [2024-07-27 02:32:19.332672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:51.217 [2024-07-27 02:32:19.335924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.217 [2024-07-27 02:32:19.345209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.217 [2024-07-27 02:32:19.345654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.217 [2024-07-27 02:32:19.345682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1729b50 with addr=10.0.0.2, port=4420 00:32:51.217 [2024-07-27 02:32:19.345699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1729b50 is same with the state(5) to be set 00:32:51.217 [2024-07-27 02:32:19.345914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1729b50 (9): Bad file descriptor 00:32:51.217 [2024-07-27 02:32:19.346144] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.217 [2024-07-27 02:32:19.346167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.217 [2024-07-27 02:32:19.346181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.217 [2024-07-27 02:32:19.349495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.217 [2024-07-27 02:32:19.350139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.217 02:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1184081 00:32:51.217 [2024-07-27 02:32:19.358699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.475 [2024-07-27 02:32:19.558822] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:01.453 00:33:01.453 Latency(us) 00:33:01.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.453 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:01.453 Verification LBA range: start 0x0 length 0x4000 00:33:01.453 Nvme1n1 : 15.00 6658.44 26.01 9395.34 0.00 7948.32 1110.47 22039.51 00:33:01.453 =================================================================================================================== 00:33:01.453 Total : 6658.44 26.01 9395.34 0.00 7948.32 1110.47 22039.51 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:01.453 rmmod nvme_tcp 00:33:01.453 rmmod nvme_fabrics 00:33:01.453 rmmod nvme_keyring 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 1184747 ']' 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1184747 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1184747 ']' 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1184747 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1184747 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1184747' 00:33:01.453 killing process with pid 1184747 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1184747 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1184747 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.453 02:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:03.362 00:33:03.362 real 0m22.210s 00:33:03.362 user 0m59.803s 00:33:03.362 sys 0m4.091s 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.362 ************************************ 00:33:03.362 END TEST nvmf_bdevperf 00:33:03.362 ************************************ 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.362 ************************************ 00:33:03.362 START TEST nvmf_target_disconnect 00:33:03.362 ************************************ 00:33:03.362 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:03.362 * Looking for test storage... 00:33:03.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:03.363 02:32:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:03.363 02:32:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:04.774 02:32:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:04.774 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:04.774 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:04.774 02:32:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:04.774 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:04.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:04.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:04.775 02:32:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.775 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.035 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.035 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.035 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:05.035 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.035 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.035 02:32:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:05.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:33:05.035 00:33:05.035 --- 10.0.0.2 ping statistics --- 00:33:05.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.035 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:05.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:33:05.035 00:33:05.035 --- 10.0.0.1 ping statistics --- 00:33:05.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.035 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:05.035 ************************************ 00:33:05.035 START TEST nvmf_target_disconnect_tc1 00:33:05.035 ************************************ 00:33:05.035 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:05.036 02:32:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:05.036 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.036 [2024-07-27 02:32:33.139659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.036 [2024-07-27 02:32:33.139736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a13e0 with addr=10.0.0.2, port=4420 00:33:05.036 [2024-07-27 02:32:33.139767] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:05.036 [2024-07-27 02:32:33.139804] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:05.036 [2024-07-27 02:32:33.139820] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:05.036 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:05.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:05.036 Initializing NVMe Controllers 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:05.036 00:33:05.036 real 0m0.092s 00:33:05.036 user 0m0.040s 00:33:05.036 sys 0m0.052s 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:05.036 ************************************ 00:33:05.036 END TEST nvmf_target_disconnect_tc1 00:33:05.036 ************************************ 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:05.036 02:32:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:05.036 ************************************ 00:33:05.036 START TEST nvmf_target_disconnect_tc2 00:33:05.036 ************************************ 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:05.036 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:05.296 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.296 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1187905 00:33:05.296 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:05.296 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1187905 00:33:05.297 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1187905 ']' 00:33:05.297 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.297 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:05.297 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.297 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:05.297 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.297 [2024-07-27 02:32:33.247144] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:33:05.297 [2024-07-27 02:32:33.247241] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.297 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.297 [2024-07-27 02:32:33.289908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
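Note: the tc2 target above is started inside the cvl_0_0_ns_spdk namespace by the nvmfappstart/waitforlisten helpers. A minimal standalone equivalent, assuming it is run as root from an SPDK build tree and that the app uses the default RPC socket /var/tmp/spdk.sock (both as seen in this run):

  # Launch nvmf_tgt in the target namespace (flags copied from the trace:
  # instance 0, tracepoint group mask 0xFFFF, core mask 0xF0), then wait
  # for the RPC socket so provisioning does not race application startup.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done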
00:33:05.297 [2024-07-27 02:32:33.317285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:05.297 [2024-07-27 02:32:33.409324] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.297 [2024-07-27 02:32:33.409384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.297 [2024-07-27 02:32:33.409398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.297 [2024-07-27 02:32:33.409423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.297 [2024-07-27 02:32:33.409433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.297 [2024-07-27 02:32:33.409515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:05.297 [2024-07-27 02:32:33.409579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:05.297 [2024-07-27 02:32:33.409600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:05.297 [2024-07-27 02:32:33.409603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.556 Malloc0 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.556 [2024-07-27 02:32:33.578401] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.556 [2024-07-27 02:32:33.606692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1187936 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:05.556 02:32:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:05.556 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.099 02:32:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1187905 00:33:08.099 02:32:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed 
with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.099 Read completed with error (sct=0, sc=8) 00:33:08.099 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 [2024-07-27 02:32:35.631907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, 
sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 [2024-07-27 02:32:35.632269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 
00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 [2024-07-27 02:32:35.632561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Write completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.100 starting I/O failed 00:33:08.100 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 
00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Write completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 Read completed with error (sct=0, sc=8) 00:33:08.101 starting I/O failed 00:33:08.101 [2024-07-27 02:32:35.632883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:08.101 [2024-07-27 02:32:35.633155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.633188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.633348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.633375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.633535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.633560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.633738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.633770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.633960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.633989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.634179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.634206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.634373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.634398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 
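Note: each of the four qpairs above reports CQ transport error -6 and reconnect attempts then start failing because the harness killed the target (kill -9 1187905) two seconds into the I/O run. For reference, the subsystem those qpairs were connected to was provisioned with the rpc_cmd calls traced earlier; an equivalent standalone sequence using SPDK's scripts/rpc.py client (the harness wraps these in rpc_cmd; parameters copied verbatim from the trace, default /var/tmp/spdk.sock assumed):

  # 64 MB malloc bdev with 512 B blocks, exposed over NVMe/TCP on 10.0.0.2:4420.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420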
00:33:08.101 [2024-07-27 02:32:35.634548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.634575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.634776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.634802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.635006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.635032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.635223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.635248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.635426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.635451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.635607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.635633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.635809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.635835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.635992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.636018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.636172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.636197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.636379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.636405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 
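Note: errno 111 in these records is ECONNREFUSED: the initiator's TCP SYN reaches 10.0.0.2, but nothing is listening on port 4420 after the kill, so each reconnect attempt fails immediately with a RST rather than timing out. A rough shell illustration of the same condition using bash's /dev/tcp redirection (illustrative only, not part of the harness):

  # Exit status 0 once something accepts on 10.0.0.2:4420; until then the
  # redirect fails with 'Connection refused', like the probes above.
  until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      sleep 0.5
  done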
00:33:08.101 [2024-07-27 02:32:35.636556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.636582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.636786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.636814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.637014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.637041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.637236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.637262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.637443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.637469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.637649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.637674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.637864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.101 [2024-07-27 02:32:35.637889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.101 qpair failed and we were unable to recover it. 00:33:08.101 [2024-07-27 02:32:35.638077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.638104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.638276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.638302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.638506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.638532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 
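Note: all of these retries target 10.0.0.2:4420 across the cvl_0_0/cvl_0_1 interface pair wired up at the top of this run (cvl_0_0 moved into the namespace as the target side, cvl_0_1 left in the root namespace as the initiator side). A condensed standalone sketch of that wiring, with the interface names and addresses from this run:

  # Target side lives in its own namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Reachability check in both directions, as the harness does.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1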
00:33:08.102 [2024-07-27 02:32:35.638682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.638724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.638896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.638941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.639170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.639196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.639385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.639425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.639626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.639656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.639885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.639930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.640107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.640135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.640311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.640340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.640575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.640619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.640870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.640919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 
00:33:08.102 [2024-07-27 02:32:35.641121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.641147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.641430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.641473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.641674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.641717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.641896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.641922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.642101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.642128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.642300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.642344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.642595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.642626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.642798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.642824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.642978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.643003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.643203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.643248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 
00:33:08.102 [2024-07-27 02:32:35.643421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.643465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.643742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.643768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.643944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.643970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.644117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.644143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.644353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.644380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.644573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.644602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.644790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.644815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.644963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.644989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.645203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.645244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-27 02:32:35.645450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-27 02:32:35.645479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 
00:33:08.103 [2024-07-27 02:32:35.645881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.645937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.646168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.646197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.646382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.646407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.646583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.646608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.646784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.646809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.646993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.647020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.647209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.647235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.647385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.647411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.647616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.647644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.647886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.647912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 
00:33:08.103 [2024-07-27 02:32:35.648096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.648122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.648306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.648331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.648659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.648714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.649013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.649043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.649207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.649233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.649407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.649432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.649580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.649606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.649760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.649785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.649956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.649984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-27 02:32:35.650164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-27 02:32:35.650190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 
00:33:08.103 [2024-07-27 02:32:35.650333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.103 [2024-07-27 02:32:35.650358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.103 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats approximately 200 more times between 02:32:35.650 and 02:32:35.695, with only the timestamps and the tqpair pointer varying (mostly 0x1da04b0, with bursts of 0x7ffbdc000b90); every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:33:08.110 [2024-07-27 02:32:35.695371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.110 [2024-07-27 02:32:35.695396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.110 qpair failed and we were unable to recover it.
00:33:08.110 [2024-07-27 02:32:35.695578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.695603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.695783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.695808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.696012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.696037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.696193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.696218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.696391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.696416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.696589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.696615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.696838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.696866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.697052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.697083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.697243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.697268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.697442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.697468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 
00:33:08.110 [2024-07-27 02:32:35.697617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.697644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.697845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.697870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.698019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.698044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.698260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.698293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.698516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.698542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.698705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.110 [2024-07-27 02:32:35.698733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.110 qpair failed and we were unable to recover it. 00:33:08.110 [2024-07-27 02:32:35.698921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.698950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.699148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.699175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.699356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.699382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.699534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.699559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 
00:33:08.111 [2024-07-27 02:32:35.699741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.699766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.699951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.699977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.700176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.700205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.700406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.700431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.700568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.700594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.700767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.700792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.700945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.700970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.701184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.701229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.701474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.701502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.701656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.701681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 
00:33:08.111 [2024-07-27 02:32:35.701906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.701934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.702111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.702137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.702342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.702367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.702600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.702652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.702871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.702899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.703104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.703130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.703303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.703329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.703532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.703558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.703703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.703729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.703907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.703933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 
00:33:08.111 [2024-07-27 02:32:35.704129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.704160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.704337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.704363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.704532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.704557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.704723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.704752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.704977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.705003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.705183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.705210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.705359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.705385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.705534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.705559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.705768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.705820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 00:33:08.111 [2024-07-27 02:32:35.706008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.706036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.111 qpair failed and we were unable to recover it. 
00:33:08.111 [2024-07-27 02:32:35.706214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.111 [2024-07-27 02:32:35.706240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.706428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.706456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.706661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.706687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.706839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.706866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.707077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.707104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.707255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.707280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.707456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.707483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.707723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.707752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.707946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.707974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.708150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.708175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 
00:33:08.112 [2024-07-27 02:32:35.708396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.708424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.708602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.708628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.708785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.708812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.708968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.708994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.709171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.709197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.709400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.709426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.709604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.709629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.709786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.709812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.709980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.710009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.710202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.710229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 
00:33:08.112 [2024-07-27 02:32:35.710429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.710455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.710666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.710692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.710867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.710893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.711074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.711100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.711256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.711281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.711426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.711453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.711648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.711677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.711900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.711925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.712101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.712127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.712310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.712337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 
00:33:08.112 [2024-07-27 02:32:35.712491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.712521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.712699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.712724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.712917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.712945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-27 02:32:35.713150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-27 02:32:35.713176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.713348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.713377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.713595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.713624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.713814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.713839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.713985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.714012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.714183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.714209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.714388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.714414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 
00:33:08.113 [2024-07-27 02:32:35.714637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.714665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.714895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.714923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.715135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.715161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.715365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.715391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.715572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.715598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.715764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.715790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.715966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.715991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.716192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.716218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.716389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.716415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.716585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.716610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 
00:33:08.113 [2024-07-27 02:32:35.716759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.716786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.716969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.716994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.717177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.717203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.717380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.717406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.717584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.717609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.717785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.717811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.717960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.717985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.718162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.718188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.718326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.718352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.718571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.718599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 
00:33:08.113 [2024-07-27 02:32:35.718828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.718854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.719023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.719048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.719220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.719246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.719419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.719444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.719624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.719650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.719794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.719820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-27 02:32:35.719974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-27 02:32:35.720000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.720185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.720211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.720385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.720413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.720615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.720640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 
00:33:08.114 [2024-07-27 02:32:35.720784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.720814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.720959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.720986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.721160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.721185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.721372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.721397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.721571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.721596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.721773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.721798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.721992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.722021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.722194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.722219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.722374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.722401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-27 02:32:35.722576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-27 02:32:35.722602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 
00:33:08.114 [2024-07-27 02:32:35.722781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.114 [2024-07-27 02:32:35.722806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.114 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 02:32:35.722 and 02:32:35.768 (log prefix 00:33:08.114-00:33:08.120); only the timestamps differ ...]
00:33:08.120 [2024-07-27 02:32:35.768292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.120 [2024-07-27 02:32:35.768317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.120 qpair failed and we were unable to recover it. 00:33:08.120 [2024-07-27 02:32:35.768518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.120 [2024-07-27 02:32:35.768545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.120 qpair failed and we were unable to recover it. 00:33:08.120 [2024-07-27 02:32:35.768753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.120 [2024-07-27 02:32:35.768778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.120 qpair failed and we were unable to recover it. 00:33:08.120 [2024-07-27 02:32:35.768934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.120 [2024-07-27 02:32:35.768960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.120 qpair failed and we were unable to recover it. 00:33:08.120 [2024-07-27 02:32:35.769106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.120 [2024-07-27 02:32:35.769132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.120 qpair failed and we were unable to recover it. 00:33:08.120 [2024-07-27 02:32:35.769288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.769313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.769491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.769516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.769695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.769725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.769920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.769948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.770156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.770183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 
00:33:08.121 [2024-07-27 02:32:35.770361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.770387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.770608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.770637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.770843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.770871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.771074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.771100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.771276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.771318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.771534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.771562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.771743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.771768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.771920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.771945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.772153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.772181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.772350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.772376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 
00:33:08.121 [2024-07-27 02:32:35.772534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.772560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.772731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.772757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.772962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.772991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.773172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.773198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.773369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.773395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.773570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.773596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.773773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.773799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.773971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.773997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.774189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.774215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.774405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.774434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 
00:33:08.121 [2024-07-27 02:32:35.774639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.774669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.774843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.121 [2024-07-27 02:32:35.774870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.121 qpair failed and we were unable to recover it. 00:33:08.121 [2024-07-27 02:32:35.775056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.775091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.775244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.775272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.775441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.775467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.775701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.775730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.775902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.775931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.776126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.776152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.776347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.776376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.776539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.776569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-27 02:32:35.776773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.776799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.776961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.776989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.777189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.777215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.777419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.777445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.777646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.777674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.777899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.777925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.778077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.778104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.778300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.778335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.778525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.778553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.778726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.778752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-27 02:32:35.778908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.778933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.779113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.779140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.779324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.779351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.779547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.779575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.779766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.779795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.779993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.780019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.780173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.780199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.780356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.780381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.780561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.780586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.780767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.780793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 
00:33:08.122 [2024-07-27 02:32:35.780993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.781018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.781273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.781299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.781500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.781528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.781721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.781749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.781933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.781958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.782181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.122 [2024-07-27 02:32:35.782210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.122 qpair failed and we were unable to recover it. 00:33:08.122 [2024-07-27 02:32:35.782408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.782435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.782614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.782639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.782837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.782863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.783039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.783075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 
00:33:08.123 [2024-07-27 02:32:35.783275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.783301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.783460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.783486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.783656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.783681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.783931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.783980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.784184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.784211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.784365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.784391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.784544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.784571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.784760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.784790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.784983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.785013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.785218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.785244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 
00:33:08.123 [2024-07-27 02:32:35.785448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.785476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.785668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.785697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.785869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.785895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.786117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.786145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.786341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.786369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.786571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.786596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.786796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.786824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.787035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.787080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.787286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.787312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.787493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.787519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 
00:33:08.123 [2024-07-27 02:32:35.787720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.787749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.787943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.787968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.788121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.788147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.788348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.788375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.788523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.788549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.788743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.788772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.788971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.788997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.789204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.123 [2024-07-27 02:32:35.789230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.123 qpair failed and we were unable to recover it. 00:33:08.123 [2024-07-27 02:32:35.789440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.789468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.789643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.789670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 
00:33:08.124 [2024-07-27 02:32:35.789842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.789867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.790102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.790131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.790337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.790363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.790520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.790545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.790748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.790774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.790931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.790957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.791140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.791166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.791369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.791395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.791569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.791594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.791777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.791802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 
00:33:08.124 [2024-07-27 02:32:35.792006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.792035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.792242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.792270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.792447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.792474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.792688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.792713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.792918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.792947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.793141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.793167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.793372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.793401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.793606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.793634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.793856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.793881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.794030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.794072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 
00:33:08.124 [2024-07-27 02:32:35.794253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.794278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.794456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.794482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.794661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.794687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.794879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.794907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.795139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.795166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.795320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.795347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.795545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.795571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.795722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.795752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.795896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.795922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.124 [2024-07-27 02:32:35.796100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.796127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 
00:33:08.124 [2024-07-27 02:32:35.796300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.124 [2024-07-27 02:32:35.796327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.124 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.796528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.796553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.796723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.796751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.796954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.796981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.797157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.797183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.797333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.797359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.797560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.797585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.797785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.797814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.798031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.798066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.798272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.798298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 
00:33:08.125 [2024-07-27 02:32:35.798461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.798488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.798638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.798664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.798868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.798894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.799121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.799149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.799331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.799356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.799531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.799557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.799711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.799737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.799945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.799971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.800109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.800135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.800315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.800341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 
00:33:08.125 [2024-07-27 02:32:35.800491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.800517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.800689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.800715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.800919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.800947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.801144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.801173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.801382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.801411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.801629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.801655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.801887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.801915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.802113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.802139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.802313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.802339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.802480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.802506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 
00:33:08.125 [2024-07-27 02:32:35.802729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.802757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.802981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.803010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.803206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.803235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.803433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.803458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.803669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.125 [2024-07-27 02:32:35.803697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.125 qpair failed and we were unable to recover it. 00:33:08.125 [2024-07-27 02:32:35.803890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.803918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.804094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.804119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.804318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.804347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.804504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.804529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.804702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.804727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 
00:33:08.126 [2024-07-27 02:32:35.804918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.804946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.805117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.805143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.805291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.805317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.805545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.805573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.805736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.805764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.805963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.805989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.806153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.806179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.806325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.806351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.806581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.806610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.806787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.806813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 
00:33:08.126 [2024-07-27 02:32:35.806972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.806997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.807233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.807263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.807454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.807483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.807677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.807704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.807856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.807883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.808064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.808090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.808287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.808315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.808478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.808505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.808703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.808732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.808924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.808953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 
00:33:08.126 [2024-07-27 02:32:35.809155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.809181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.126 qpair failed and we were unable to recover it. 00:33:08.126 [2024-07-27 02:32:35.809385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.126 [2024-07-27 02:32:35.809411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.809590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.809616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.809787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.809812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.809997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.810026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.810256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.810282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.810510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.810536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.810685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.810710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.810903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.810931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.811134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.811160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 
00:33:08.127 [2024-07-27 02:32:35.811361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.811390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.811550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.811578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.811748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.811774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.811946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.811971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.812200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.812229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.812396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.812424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.812657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.812683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.812828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.812858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.813008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.813034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.813242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.813268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 
00:33:08.127 [2024-07-27 02:32:35.813481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.813509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.813738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.813763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.813991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.814019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.814226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.814270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.814488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.814517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.814687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.814712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.814862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.814888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.815076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.815103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.815299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.815328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.815563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.815589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 
00:33:08.127 [2024-07-27 02:32:35.815743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.815769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.815947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.815973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.816137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.816163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.816340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.816366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.816586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.816614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.816810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.127 [2024-07-27 02:32:35.816839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.127 qpair failed and we were unable to recover it. 00:33:08.127 [2024-07-27 02:32:35.817048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.817082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.817278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.817304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.817514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.817542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.817738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.817767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 
00:33:08.128 [2024-07-27 02:32:35.817958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.817987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.818215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.818241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.818445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.818474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.818668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.818696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.818918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.818946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.819121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.819157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.819334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.819359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.819579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.819608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.819829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.819854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.820027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.820053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 
00:33:08.128 [2024-07-27 02:32:35.820236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.820262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.820409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.820450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.820688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.820713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.820862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.820889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.821110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.821139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.821309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.821335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.821533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.821561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.821735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.821764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.821944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.821970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.822143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.822169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 
00:33:08.128 [2024-07-27 02:32:35.822342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.822368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.822538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.822563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.822710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.822736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.822929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.822958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.823118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.823146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.823342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.823368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.823521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.823547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.823754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.823780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.823943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.823972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 00:33:08.128 [2024-07-27 02:32:35.824146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.824172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.128 qpair failed and we were unable to recover it. 
00:33:08.128 [2024-07-27 02:32:35.824346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.128 [2024-07-27 02:32:35.824371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.824552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.824579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.824735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.824761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.824913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.824938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.825108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.825134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.825307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.825333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.825524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.825549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.825702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.825728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.825915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.825940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.826162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.826191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 
00:33:08.129 [2024-07-27 02:32:35.826385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.826414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.826613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.826638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.826790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.826815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.826995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.827021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.827216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.827242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.827445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.827471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.827667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.827695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.827862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.827890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.828070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.828097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.828252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.828279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 
00:33:08.129 [2024-07-27 02:32:35.828482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.828507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.828719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.828762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.828967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.828997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.829201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.829227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.829372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.829415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.829616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.829642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.829789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.829833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.830024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.830049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.830273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.830298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.830547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.830574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 
00:33:08.129 [2024-07-27 02:32:35.830743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.830768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.830940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.830968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.831195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.831221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.831418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.831446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.831668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.129 [2024-07-27 02:32:35.831696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.129 qpair failed and we were unable to recover it. 00:33:08.129 [2024-07-27 02:32:35.831866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.831892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.832095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.832124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.832315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.832344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.832540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.832565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.832743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.832769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 
00:33:08.130 [2024-07-27 02:32:35.832943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.832969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.833146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.833173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.833344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.833370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.833526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.833552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.833847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.833899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.834133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.834159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.834428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.834456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.834676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.834701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.834984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.835033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.835236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.835276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 
00:33:08.130 [2024-07-27 02:32:35.835464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.835499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.835696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.835722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.835943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.835972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.836181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.836207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.836389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.836420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.836573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.836599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.836775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.836801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.836999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.837025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.837221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.837247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 00:33:08.130 [2024-07-27 02:32:35.837431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.130 [2024-07-27 02:32:35.837456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.130 qpair failed and we were unable to recover it. 
00:33:08.130 [2024-07-27 02:32:35.837598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.837625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.837777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.837802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.837954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.837979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.838169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.838196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.838365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.838389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.838581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.838621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.838828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.838872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.839076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.839103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.839333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.839362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.130 [2024-07-27 02:32:35.839599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.130 [2024-07-27 02:32:35.839626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.130 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.839779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.839805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.839975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.840000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.840194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.840220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.840445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.840473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.840702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.840727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.840906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.840931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.841102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.841128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.841277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.841302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.841533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.841561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.841733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.841758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.841902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.841927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.842100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.842131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.842289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.842314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.842554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.842579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.842844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.842872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.843076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.843102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.843252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.843278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.843455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.843481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.843654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.843679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.843825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.843852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.844040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.844079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.844234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.844259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.844424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.844452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.844767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.844829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.845056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.845107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.845295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.845322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.845474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.845500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.845722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.845775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.845974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.131 [2024-07-27 02:32:35.846003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.131 qpair failed and we were unable to recover it.
00:33:08.131 [2024-07-27 02:32:35.846209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.846235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.846381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.846422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.846627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.846676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.846837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.846865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.847077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.847103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.847280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.847305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.847525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.847576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.847780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.847805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.847980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.848006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.848175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.848208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.848388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.848417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.848590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.848619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.848781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.848807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.849005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.849033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.849220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.849247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.849438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.849464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.849684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.849709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.849882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.849907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.850109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.850152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.850325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.850351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.850505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.850530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.850687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.850712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.850900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.850928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.851134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.851161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.851332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.851357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.851533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.851558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.851724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.851764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.851985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.852017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.852236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.852264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.852446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.852472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.852811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.852882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.853080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.853124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.853277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.853302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.853495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.853521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.853797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.132 [2024-07-27 02:32:35.853848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.132 qpair failed and we were unable to recover it.
00:33:08.132 [2024-07-27 02:32:35.854074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.854100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.854254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.854283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.854521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.854572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.854875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.854914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.855135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.855163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.855347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.855373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.855578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.855608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.855873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.855900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.856106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.856147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.856325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.856351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.856550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.856614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.856846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.856872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.857043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.857077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.857280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.857305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.857505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.857533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.857779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.857806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.857981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.858010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.858185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.858211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.858404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.858434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.858723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.858767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.858979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.859006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.859186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.859212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.859393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.859418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.859655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.859686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.859883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.859912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.860203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.860229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.860408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.860451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.860827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.860893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.861101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.861132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.861336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.861368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.861620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.861672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.861910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.861935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.862111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.862137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.862293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.862318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.862511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.133 [2024-07-27 02:32:35.862539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.133 qpair failed and we were unable to recover it.
00:33:08.133 [2024-07-27 02:32:35.862855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.862925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.863156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.863184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.863361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.863386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.863537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.863563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.863741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.863766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.863962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.863990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.864196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.864222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.864388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.864417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.864653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.864693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.864873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.864903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.865130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.865156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.865335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.865368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.865566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.865597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.865815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.865844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.866074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.866100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.866247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.866273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.866597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.866651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.866870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.866899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.867098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.867124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.867304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.867330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.867495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.867525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.867731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.867766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.867919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.867945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.868048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dae470 is same with the state(5) to be set
00:33:08.134 [2024-07-27 02:32:35.868271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.868299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.868481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.868509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.868703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.868728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.868892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.868920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.869156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.869182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.869363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.869389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.869567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.869596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.869791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.869820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.870018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.870043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.870274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.870314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.870498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.870524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.134 qpair failed and we were unable to recover it.
00:33:08.134 [2024-07-27 02:32:35.870671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.134 [2024-07-27 02:32:35.870696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.870895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.870921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.871115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.871144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.871340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.871365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.871515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.871540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.871696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.871721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.871859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.871884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.872084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.872110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.872287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.872312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.872497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.872523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.872682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.872709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.872897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.872925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.873142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.873168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.873358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.873385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.873558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.873583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.873737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.873762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.873945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.873973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.874156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.874182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.874360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.874385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.874574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.874602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.874767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.874795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.874988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.875014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.875191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.875220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.875387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.875412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.875565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.875590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.875766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.135 [2024-07-27 02:32:35.875791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.135 qpair failed and we were unable to recover it.
00:33:08.135 [2024-07-27 02:32:35.875964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.875993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.876199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.876225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.876399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.876424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.876559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.876585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.876762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.876787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.876935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.876961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.877197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.877223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.877399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.877424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.877597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.135 [2024-07-27 02:32:35.877623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.135 qpair failed and we were unable to recover it. 00:33:08.135 [2024-07-27 02:32:35.877806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.877834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 
00:33:08.136 [2024-07-27 02:32:35.878057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.878088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.878291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.878316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.878529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.878554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.878707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.878732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.878887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.878913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.879066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.879091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.879295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.879320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.879509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.879534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.879704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.879732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.879957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.879985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 
00:33:08.136 [2024-07-27 02:32:35.880189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.880217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.880387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.880429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.880652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.880677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.880994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.881049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.881255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.881280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.881455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.881480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.881636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.881661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.881862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.881895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.882104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.882130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.882316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.882341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 
00:33:08.136 [2024-07-27 02:32:35.882513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.882538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.882690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.882725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.882930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.882956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.883111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.883138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.883315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.883341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.883490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.883516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.883703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.883731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.883907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.883932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.884134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.136 [2024-07-27 02:32:35.884160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.136 qpair failed and we were unable to recover it. 00:33:08.136 [2024-07-27 02:32:35.884307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.884332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 
00:33:08.137 [2024-07-27 02:32:35.884483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.884508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.884682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.884708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.884881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.884906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.885127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.885153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.885359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.885384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.885594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.885619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.885820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.885845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.886018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.886044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.886190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.886215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.886367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.886392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 
00:33:08.137 [2024-07-27 02:32:35.886585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.886613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.886801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.886829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.887025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.887050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.887234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.887259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.887405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.887434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.887615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.887644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.887820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.887846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.887988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.888013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.888202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.888228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.888458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.888486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 
00:33:08.137 [2024-07-27 02:32:35.888663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.888688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.888861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.888887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.889065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.889110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.889294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.889319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.889522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.889547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.889781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.889835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.890030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.890055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.890222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.890248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.890452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.890481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.890675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.890701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 
00:33:08.137 [2024-07-27 02:32:35.890873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.890898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.891072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.891099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.891334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.891362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.891573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.137 [2024-07-27 02:32:35.891598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.137 qpair failed and we were unable to recover it. 00:33:08.137 [2024-07-27 02:32:35.891774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.891800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.891990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.892018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.892213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.892238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.892404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.892432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.892587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.892615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.892808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.892833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 
00:33:08.138 [2024-07-27 02:32:35.893031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.893064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.893255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.893282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.893479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.893504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.893703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.893731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.893956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.893983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.894180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.894206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.894374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.894403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.894604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.894629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.894830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.894855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.895086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.895128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 
00:33:08.138 [2024-07-27 02:32:35.895281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.895306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.895465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.895490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.895673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.895697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.895898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.895926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.896140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.896165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.896329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.896358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.896559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.896584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.896758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.896783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.896973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.896998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.897217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.897244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 
00:33:08.138 [2024-07-27 02:32:35.897444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.897468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.897648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.897673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.897875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.897900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.898134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.898160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.898334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.898362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.898580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.898605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.898779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.898804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.898980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.899005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.899202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.138 [2024-07-27 02:32:35.899231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.138 qpair failed and we were unable to recover it. 00:33:08.138 [2024-07-27 02:32:35.899437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.899464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 
00:33:08.139 [2024-07-27 02:32:35.899618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.899644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.899822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.899847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.900031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.900056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.900278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.900306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.900499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.900527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.900686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.900711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.900883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.900910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.901115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.901141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.901321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.901347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.901520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.901554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 
00:33:08.139 [2024-07-27 02:32:35.901756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.901781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.901937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.901963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.902154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.902189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.902383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.902411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.902634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.902660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.902853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.902881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.903081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.903107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.903306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.903331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.903527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.903555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.903747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.903775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 
00:33:08.139 [2024-07-27 02:32:35.903934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.903959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.904149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.904175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.904326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.904352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.904524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.904549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.904716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.904744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.904906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.904933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.905114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.905140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.905330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.905358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.905558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.905585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.905745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.905771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 
00:33:08.139 [2024-07-27 02:32:35.905962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.905990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.906188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.906216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.906389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.906415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.906589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.906614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.139 qpair failed and we were unable to recover it. 00:33:08.139 [2024-07-27 02:32:35.906789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.139 [2024-07-27 02:32:35.906817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.140 qpair failed and we were unable to recover it. 00:33:08.140 [2024-07-27 02:32:35.906984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.140 [2024-07-27 02:32:35.907009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.140 qpair failed and we were unable to recover it. 00:33:08.140 [2024-07-27 02:32:35.907250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.140 [2024-07-27 02:32:35.907279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.140 qpair failed and we were unable to recover it. 00:33:08.140 [2024-07-27 02:32:35.907494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.140 [2024-07-27 02:32:35.907519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.140 qpair failed and we were unable to recover it. 00:33:08.140 [2024-07-27 02:32:35.907693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.140 [2024-07-27 02:32:35.907718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.140 qpair failed and we were unable to recover it. 00:33:08.140 [2024-07-27 02:32:35.907906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.140 [2024-07-27 02:32:35.907938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.140 qpair failed and we were unable to recover it. 
00:33:08.140 [2024-07-27 02:32:35.908133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.140 [2024-07-27 02:32:35.908159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.140 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420; qpair unrecoverable) repeats back-to-back from 02:32:35.908 through 02:32:35.935 ...]
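The block above is a TCP-level refusal, not an NVMe protocol error: on Linux, errno = 111 is ECONNREFUSED, meaning the host's connect() toward 10.0.0.2 port 4420 (the conventional NVMe/TCP port) was rejected because nothing was accepting connections there, so nvme_tcp_qpair_connect_sock cannot bring the queue pair's socket up and the qpair is abandoned. The sketch below is a minimal, assumed illustration (not SPDK's posix_sock_create itself); it performs the same single blocking connect() against the address and port taken from the log and reports errno in the same shape.

/* Minimal standalone sketch (illustration only, not SPDK code): one
 * blocking TCP connect to the address/port seen in the log.  When no
 * listener is up, connect() fails with errno = 111 (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa;
    int fd;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* Mirrors the log line "connect() failed, errno = 111". */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}

Built with a plain `cc` invocation, this prints "connect() failed, errno = 111 (Connection refused)" whenever the target at 10.0.0.2:4420 is down, which is exactly the condition the test log captures.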
00:33:08.144 [2024-07-27 02:32:35.936140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.144 [2024-07-27 02:32:35.936166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.144 qpair failed and we were unable to recover it.
[... four more identical failures for tqpair=0x1da04b0 through 02:32:35.936 ...]
00:33:08.144 [2024-07-27 02:32:35.937211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.144 [2024-07-27 02:32:35.937256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.144 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7ffbd4000b90 through 02:32:35.946 ...]
00:33:08.145 [2024-07-27 02:32:35.946642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.145 [2024-07-27 02:32:35.946682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:08.145 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7ffbdc000b90 through 02:32:35.953, then again for tqpair=0x7ffbd4000b90 through 02:32:35.954 ...]
00:33:08.146 [2024-07-27 02:32:35.955131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.955157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.955359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.955384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.955613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.955643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.955848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.955877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.956109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.956136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.956316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.956361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.956567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.956600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.956898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.956948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.957157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.957187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.957356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.957388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 
00:33:08.146 [2024-07-27 02:32:35.957589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.957618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.957816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.957845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.958012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.958038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.958251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.958277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.958488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.958517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.958723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.958764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.959025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.959054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.959259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.146 [2024-07-27 02:32:35.959284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.146 qpair failed and we were unable to recover it. 00:33:08.146 [2024-07-27 02:32:35.959513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.959553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.959816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.959866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 
00:33:08.147 [2024-07-27 02:32:35.960054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.960122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.960303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.960348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.960566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.960595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.960758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.960788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.961008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.961038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.961289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.961329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.961506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.961551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.961768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.961811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.962020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.962046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.962228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.962254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 
00:33:08.147 [2024-07-27 02:32:35.962453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.962500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.962767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.962817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.963005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.963032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.963191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.963218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.963395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.963437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.963664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.963710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.963902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.963931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.964149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.964175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.964384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.964412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.964703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.964745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 
00:33:08.147 [2024-07-27 02:32:35.964937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.964965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.965142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.965168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.965369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.965416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.965675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.965725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.965904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.965931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.966153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.966196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.966428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.966472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.966691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.966734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.966917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.966943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.967169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.967212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 
00:33:08.147 [2024-07-27 02:32:35.967413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.967457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.967725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.967754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.967974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.147 [2024-07-27 02:32:35.968000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.147 qpair failed and we were unable to recover it. 00:33:08.147 [2024-07-27 02:32:35.968210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.968254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.968451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.968494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.968722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.968767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.968921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.968948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.969149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.969193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.969380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.969428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.969633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.969677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 
00:33:08.148 [2024-07-27 02:32:35.969890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.969916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.970170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.970199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.970415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.970460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.970660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.970688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.970882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.970909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.971110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.971153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.971361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.971404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.971651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.971702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.971879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.971904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.972087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.972114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 
00:33:08.148 [2024-07-27 02:32:35.972385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.972413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.972660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.972704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.972883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.972909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.973135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.973178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.973377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.973405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.973647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.973690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.973869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.973894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.974074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.974101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.974297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.974341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.974537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.974566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 
00:33:08.148 [2024-07-27 02:32:35.974782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.974825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.974998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.975024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.975235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.975278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.975511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.975554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.975761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.975804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.976015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.976041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.976254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.976282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.148 [2024-07-27 02:32:35.976496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.148 [2024-07-27 02:32:35.976524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.148 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.976765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.976809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.976987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.977013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 
00:33:08.149 [2024-07-27 02:32:35.977253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.977297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.977509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.977536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.977766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.977809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.977955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.977981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.978141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.978167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.978362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.978405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.978637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.978679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.978860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.978886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.979039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.979075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.979281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.979310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 
00:33:08.149 [2024-07-27 02:32:35.979523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.979565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.979793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.979836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.980012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.980038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.980240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.980283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.980500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.980542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.980773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.980817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.980995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.981021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.981200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.981244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.981438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.981467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.981667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.981710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 
00:33:08.149 [2024-07-27 02:32:35.981916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.981942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.982142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.982187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.982379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.982405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.982587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.982631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.982894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.982920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.983143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.983188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.983360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.983403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.983635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.983678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.983853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.983878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.984024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.984050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 
00:33:08.149 [2024-07-27 02:32:35.984290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.984333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.984506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.984549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.984818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.149 [2024-07-27 02:32:35.984861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.149 qpair failed and we were unable to recover it. 00:33:08.149 [2024-07-27 02:32:35.985040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.150 [2024-07-27 02:32:35.985071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.150 qpair failed and we were unable to recover it. 00:33:08.150 [2024-07-27 02:32:35.985336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.150 [2024-07-27 02:32:35.985379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.150 qpair failed and we were unable to recover it. 00:33:08.150 [2024-07-27 02:32:35.985597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.150 [2024-07-27 02:32:35.985640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.150 qpair failed and we were unable to recover it. 00:33:08.150 [2024-07-27 02:32:35.985840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.150 [2024-07-27 02:32:35.985869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.150 qpair failed and we were unable to recover it. 00:33:08.150 [2024-07-27 02:32:35.986085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.150 [2024-07-27 02:32:35.986112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.150 qpair failed and we were unable to recover it. 00:33:08.150 [2024-07-27 02:32:35.986313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.150 [2024-07-27 02:32:35.986356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.150 qpair failed and we were unable to recover it. 00:33:08.150 [2024-07-27 02:32:35.986560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.150 [2024-07-27 02:32:35.986603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.150 qpair failed and we were unable to recover it. 
00:33:08.150 [2024-07-27 02:32:35.986798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.150 [2024-07-27 02:32:35.986844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:08.150 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure triplet repeats for every reconnect attempt between 02:32:35.986798 and 02:32:36.036171; duplicate records omitted ...]
00:33:08.156 [2024-07-27 02:32:36.036171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.156 [2024-07-27 02:32:36.036215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:08.156 qpair failed and we were unable to recover it.
00:33:08.156 [2024-07-27 02:32:36.036442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.036487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.036654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.036698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.036894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.036920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.037072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.037100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.037296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.037344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.037574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.037618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.037792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.037818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.037974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.038000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.038179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.038223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.038400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.038444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 
00:33:08.156 [2024-07-27 02:32:36.038671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.038715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.038894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.038920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.039121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.156 [2024-07-27 02:32:36.039165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.156 qpair failed and we were unable to recover it. 00:33:08.156 [2024-07-27 02:32:36.039362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.039406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.039579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.039606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.039783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.039809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.039963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.039988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.040194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.040224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.040472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.040515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.040688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.040738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 
00:33:08.157 [2024-07-27 02:32:36.040916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.040943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.041135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.041185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.041368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.041412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.041613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.041657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.041861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.041888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.042040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.042078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.042276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.042322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.042528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.042572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.042764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.042806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.042986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.043012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 
00:33:08.157 [2024-07-27 02:32:36.043230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.043274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.043450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.043494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.043695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.043738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.043895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.043920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.044110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.044140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.044337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.044380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.044572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.044615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.044790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.044816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.044989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.045015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.045200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.045243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 
00:33:08.157 [2024-07-27 02:32:36.045423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.045466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.157 [2024-07-27 02:32:36.045657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.157 [2024-07-27 02:32:36.045700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.157 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.045846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.045873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.046050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.046082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.046275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.046317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.046538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.046581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.046782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.046826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.047000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.047026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.047237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.047281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.047491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.047534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 
00:33:08.158 [2024-07-27 02:32:36.047763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.047806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.047987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.048014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.048205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.048232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.048466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.048508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.048708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.048737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.048935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.048961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.049123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.049154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.049368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.049410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.049617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.049660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.049864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.049890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 
00:33:08.158 [2024-07-27 02:32:36.050099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.050126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.050317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.050361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.050590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.050632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.050862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.050905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.051053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.051093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.051267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.051298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.051510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.051553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.051758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.051801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.051979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.052005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.052211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.052255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 
00:33:08.158 [2024-07-27 02:32:36.052436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.052479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.052685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.052730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.052886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.052913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.053104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.053133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.053326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.053354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.053605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.053648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.158 [2024-07-27 02:32:36.053798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.158 [2024-07-27 02:32:36.053823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.158 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.053999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.054025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.054238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.054267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.054459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.054502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 
00:33:08.159 [2024-07-27 02:32:36.054700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.054743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.054901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.054927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.055129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.055172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.055402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.055445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.055610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.055653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.055802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.055828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.055999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.056024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.056176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.056203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.056404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.056447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 00:33:08.159 [2024-07-27 02:32:36.056655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.159 [2024-07-27 02:32:36.056698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.159 qpair failed and we were unable to recover it. 
00:33:08.159 [2024-07-27 02:32:36.057366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.159 [2024-07-27 02:32:36.057410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.159 qpair failed and we were unable to recover it.
[... at 02:32:36.057 the failing qpair address changes from 0x7ffbdc000b90 to 0x1da04b0; the identical errno = 111 connect()/recovery-failure pattern then repeats against tqpair=0x1da04b0 through 02:32:36.075 ...]
00:33:08.161 [2024-07-27 02:32:36.075592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.075620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.075788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.075816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.076006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.076034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.076272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.076302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.076509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.076538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.076707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.076735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.076906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.076947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.077151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.077177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.077371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.077399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.077624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.077649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 
00:33:08.161 [2024-07-27 02:32:36.077872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.077900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.078088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.078116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.078314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.161 [2024-07-27 02:32:36.078339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.161 qpair failed and we were unable to recover it. 00:33:08.161 [2024-07-27 02:32:36.078494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.078519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.078672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.078697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.078872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.078897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.079098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.079128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.079312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.079338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.079487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.079512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.079730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.079758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 
00:33:08.162 [2024-07-27 02:32:36.079948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.079977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.080169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.080194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.080399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.080428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.080604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.080630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.080774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.080799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.080998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.081024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.081284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.081323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.081530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.081556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.081777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.081803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.081944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.081986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 
00:33:08.162 [2024-07-27 02:32:36.082185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.082217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.082445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.082473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.082759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.082805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.082968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.082996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.083224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.083250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.083425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.083453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.083678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.083703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.083914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.083944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.084141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.084170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.084368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.084395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 
00:33:08.162 [2024-07-27 02:32:36.084567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.084595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.084789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.084817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.084986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.085011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.085178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.162 [2024-07-27 02:32:36.085208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.162 qpair failed and we were unable to recover it. 00:33:08.162 [2024-07-27 02:32:36.085413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.085440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.085595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.085620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.085791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.085817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.086035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.086069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.086245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.086271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.086468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.086496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 
00:33:08.163 [2024-07-27 02:32:36.086697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.086725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.086891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.086917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.087110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.087156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.087327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.087353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.087562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.087587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.087783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.087812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.087998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.088026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.088207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.088233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.088453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.088481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.088755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.088802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 
00:33:08.163 [2024-07-27 02:32:36.089004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.089030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.089189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.089215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.089415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.089443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.089617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.089642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.089832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.089860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.090053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.090107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.090288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.090314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.090468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.090494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.090647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.090673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.090820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.090846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 
00:33:08.163 [2024-07-27 02:32:36.091064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.091112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.091294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.091320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.091530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.091556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.091715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.091741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.091928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.091956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.092126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.092152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.092300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.092327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.163 qpair failed and we were unable to recover it. 00:33:08.163 [2024-07-27 02:32:36.092553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.163 [2024-07-27 02:32:36.092582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.092818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.092844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.093039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.093073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 
00:33:08.164 [2024-07-27 02:32:36.093280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.093305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.093454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.093479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.093697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.093725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.093895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.093923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.094098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.094124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.094297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.094338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.094511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.094539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.094738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.094765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.094965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.094994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.095198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.095224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 
00:33:08.164 [2024-07-27 02:32:36.095438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.095464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.095655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.095683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.095852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.095880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.096112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.096138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.096289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.096315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.096517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.096545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.096712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.096737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.096910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.096936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.097083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.097110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.097316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.097342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 
00:33:08.164 [2024-07-27 02:32:36.097563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.097591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.097852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.097882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.098075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.098100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.098256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.098282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.098457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.098483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.098661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.098686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.098864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.098889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.099067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.099093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.099271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.099297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.099497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.099527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 
00:33:08.164 [2024-07-27 02:32:36.099820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.099875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.100102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.100128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.164 qpair failed and we were unable to recover it. 00:33:08.164 [2024-07-27 02:32:36.100280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.164 [2024-07-27 02:32:36.100305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.100500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.100525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.100718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.100743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.100945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.100973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.101175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.101203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.101378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.101403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.101581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.101606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.101770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.101800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 
00:33:08.165 [2024-07-27 02:32:36.102022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.102048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.102260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.102289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.102510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.102539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.102744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.102769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.102950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.102978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.103170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.103200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.103425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.103450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.103619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.103649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.103870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.103899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.104101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.104127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 
00:33:08.165 [2024-07-27 02:32:36.104324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.104353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.104571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.104600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.104820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.104845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.105000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.105026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.105206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.105231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.105406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.105432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.105631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.105659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.105890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.105923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.106131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.106157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.106351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.106379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 
00:33:08.165 [2024-07-27 02:32:36.106568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.106596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.106821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.106846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.107051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.107085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.107275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.107303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.107497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.107522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.107697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.107725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.107896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.107924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.165 [2024-07-27 02:32:36.108144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.165 [2024-07-27 02:32:36.108170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.165 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.108344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.108372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.108530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.108558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 
00:33:08.166 [2024-07-27 02:32:36.108759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.108784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.108992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.109020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.109203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.109230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.109413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.109439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.109590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.109616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.109833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.109861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.110091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.110117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.110289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.110314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.110519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.110547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.110724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.110750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 
00:33:08.166 [2024-07-27 02:32:36.110962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.110990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.111195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.111221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.111399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.111424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.111600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.111625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.111861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.111890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.112071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.112097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.112291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.112317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.112526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.112554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.112719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.112744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.112965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.112994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 
00:33:08.166 [2024-07-27 02:32:36.113191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.113218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.113400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.113426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.113625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.113653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.113879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.113905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.114079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.114104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.114334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.114360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.114535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.114561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.114704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.114733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.114940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.114968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.115141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.115169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 
00:33:08.166 [2024-07-27 02:32:36.115360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.115387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.115602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.166 [2024-07-27 02:32:36.115631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.166 qpair failed and we were unable to recover it. 00:33:08.166 [2024-07-27 02:32:36.115848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.115876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.116047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.116089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.116263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.116288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.116442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.116467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.116642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.116667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.116863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.116891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.117053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.117087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.117262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.117287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 
00:33:08.167 [2024-07-27 02:32:36.117504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.117532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.117711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.117738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.117912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.117937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.118089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.118116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.118267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.118293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.118491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.118517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.118723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.118751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.118955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.118983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.119155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.119181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.119368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.119397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 
00:33:08.167 [2024-07-27 02:32:36.119553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.119581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.119777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.119803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.119992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.120020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.120221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.120250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.120447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.120474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.120675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.120703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.120890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.120919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.121110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.121137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.121285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.121311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.121535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.121564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 
00:33:08.167 [2024-07-27 02:32:36.121774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.121799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.121966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.167 [2024-07-27 02:32:36.121996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.167 qpair failed and we were unable to recover it. 00:33:08.167 [2024-07-27 02:32:36.122211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.122240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.122406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.122432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.122615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.122643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.122810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.122838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.123036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.123065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.123270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.123303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.123502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.123531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.123742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.123767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 
00:33:08.168 [2024-07-27 02:32:36.123978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.124006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.124233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.124259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.124450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.124475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.124672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.124700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.124897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.124923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.125175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.125201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.125411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.125436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.125591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.125617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.125794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.125821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.125965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.125990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 
00:33:08.168 [2024-07-27 02:32:36.126165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.126191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.126384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.126409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.126608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.126634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.126874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.126902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.127138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.127163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.127316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.127342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.127531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.127560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.127760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.127786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.127942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.127969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.128194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.128222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 
00:33:08.168 [2024-07-27 02:32:36.128391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.128417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.128613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.128642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.128862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.128891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.129069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.129094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.129277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.129302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.129509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.129538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.129714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.129740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.168 qpair failed and we were unable to recover it. 00:33:08.168 [2024-07-27 02:32:36.129913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.168 [2024-07-27 02:32:36.129938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.130144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.130172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.130390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.130416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 
00:33:08.169 [2024-07-27 02:32:36.130619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.130648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.130844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.130872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.131069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.131095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.131292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.131321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.131511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.131539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.131767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.131793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.131948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.131974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.132173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.132206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.132412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.132436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.132636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.132665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 
00:33:08.169 [2024-07-27 02:32:36.132832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.132861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.133029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.133055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.133294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.133336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.133584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.133611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.133824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.133860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.134035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.134089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.134307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.134338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.134552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.134579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.134778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.134815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.135028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.135064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 
00:33:08.169 [2024-07-27 02:32:36.135266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.135302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.135539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.135577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.135825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.135854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.136041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.136081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.136289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.136319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.136528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.136556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.136735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.136762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.136965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.137007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.137234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.137265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.137510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.137538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 
00:33:08.169 [2024-07-27 02:32:36.137714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.137756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.137977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.138004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.169 [2024-07-27 02:32:36.138171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.169 [2024-07-27 02:32:36.138208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.169 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.138364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.138392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.138619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.138648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.138863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.138891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.139077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.139115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.139345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.139372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.139594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.139621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.139831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.139862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 
00:33:08.170 [2024-07-27 02:32:36.140089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.140120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.140330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.140358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.140560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.140607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.140846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.140874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.141053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.141091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.141273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.141303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.141473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.141502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.141666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.141699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.141909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.141938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.142152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.142178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 
00:33:08.170 [2024-07-27 02:32:36.142329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.142359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.142563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.142594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.142829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.142868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.143044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.143079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.143291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.143320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.143519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.143553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.143722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.143749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.143935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.143965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.144165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.144193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.144347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.144375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 
00:33:08.170 [2024-07-27 02:32:36.144664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.144716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.144959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.144989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.145225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.145253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.145476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.145507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.145679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.145718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.145899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.145926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.146073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.146101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.146281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.170 [2024-07-27 02:32:36.146314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.170 qpair failed and we were unable to recover it. 00:33:08.170 [2024-07-27 02:32:36.146525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.171 [2024-07-27 02:32:36.146551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.171 qpair failed and we were unable to recover it. 00:33:08.171 [2024-07-27 02:32:36.146828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.171 [2024-07-27 02:32:36.146881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.171 qpair failed and we were unable to recover it. 
00:33:08.171 [2024-07-27 02:32:36.147102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.147133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.147369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.147406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.147733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.147799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.147999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.148029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.148230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.148257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.148414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.148441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.148643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.148670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.148887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.148914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.149157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.149192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.149392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.149423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.149597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.149624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.149876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.149928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.150119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.150150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.150353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.150390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.150594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.150624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.150850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.150878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.151051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.151094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.151273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.151305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.151547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.151577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.151806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.151834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.152073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.152113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.152278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.152307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.152484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.152512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.152848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.152901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.153144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.153172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.153342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.153370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.153602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.153650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.153879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.153912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.154123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.154151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.154322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.154354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.171 [2024-07-27 02:32:36.154550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.171 [2024-07-27 02:32:36.154580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.171 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.154762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.154790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.155000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.155043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.155282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.155310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.155512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.155539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.155874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.155924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.156155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.156188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.156424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.156451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.156645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.156677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.156900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.156930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.157118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.157146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.157296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.157324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.157517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.157543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.157732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.157760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.157966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.157996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.158209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.158237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.158394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.158421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.158705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.158753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.158991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.159021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.159263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.159290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.159514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.159544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.159747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.159777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.159998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.160030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.160224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.160256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.160458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.160489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.160691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.160718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.160901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.160929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.161173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.161211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.161409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.161448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.161833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.161890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.162110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.162140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.162318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.162356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.162587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.172 [2024-07-27 02:32:36.162629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.172 qpair failed and we were unable to recover it.
00:33:08.172 [2024-07-27 02:32:36.162825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.162856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.163024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.163052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.163283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.163313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.163536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.163566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.163782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.163813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.164024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.164054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.164265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.164292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.164475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.164503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.164796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.164850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.165078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.165108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.165302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.165329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.165496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.165526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.165724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.165750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.165933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.165963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.166146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.166185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.166422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.166452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.166637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.166665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.166859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.166889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.167100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.167132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.167304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.167331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.167600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.167639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.167877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.167909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.168083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.168110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.168312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.168342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.168546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.168577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.168744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.168771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.168946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.168972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.169150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.169178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.169403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.169438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.169588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.169615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.169761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.169788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.169988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.170017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.170231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.170258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.170492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.170522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.173 qpair failed and we were unable to recover it.
00:33:08.173 [2024-07-27 02:32:36.170704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.173 [2024-07-27 02:32:36.170739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.170932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.170965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.171178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.171208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.171391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.171434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.171646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.171676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.171865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.171895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.172111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.172138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.172350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.172383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.172626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.172653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.172828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.172855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.173055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.173124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.173343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.173374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.173585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.173628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.173832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.173863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.174067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.174098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.174333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.174362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.174550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.174580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.174796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.174823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.174988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.175026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.175239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.175278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.175435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.175464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.175747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.175778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.176076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.176124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.176373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.176404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.176702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.176760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.177141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.177171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.177328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.177370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.177681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.177709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.177953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.178008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.178185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.178214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.178514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.178573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.178891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.178943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.179153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.179181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.179363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.179391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.179577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.179605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.174 qpair failed and we were unable to recover it.
00:33:08.174 [2024-07-27 02:32:36.179835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.174 [2024-07-27 02:32:36.179877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.180083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.180128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.180313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.180341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.180619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.180650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.180889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.180935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.181192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.181225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.181455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.181485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.181716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.181769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.181990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.182034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.182209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.182246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.182456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.182486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.182731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.182759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.182988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.183018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.183258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.183288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.183465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.183491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.183688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.183721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.183924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.183958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.184125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.184155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.184327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.184354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.184644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.184704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.184995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.185046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.185276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.185303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.185687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.185726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.186077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.186136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.186367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.186393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.186658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.186684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.186879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.186905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.187128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.187170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.187372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.187402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.187605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.187632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.187853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.187879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.188036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.188086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.188333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.188366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.188557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.188586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.188805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.175 [2024-07-27 02:32:36.188834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.175 qpair failed and we were unable to recover it.
00:33:08.175 [2024-07-27 02:32:36.189102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.176 [2024-07-27 02:32:36.189130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.176 qpair failed and we were unable to recover it.
00:33:08.176 [2024-07-27 02:32:36.189357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.176 [2024-07-27 02:32:36.189384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.176 qpair failed and we were unable to recover it.
00:33:08.176 [2024-07-27 02:32:36.189558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.176 [2024-07-27 02:32:36.189584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.176 qpair failed and we were unable to recover it.
00:33:08.176 [2024-07-27 02:32:36.189720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.189745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.189906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.189933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.190145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.190188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.190381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.190407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.190617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.190658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.190833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.190859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.191045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.191080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.191260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.191294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.191495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.191525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.191757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.191783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 
00:33:08.176 [2024-07-27 02:32:36.191989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.192019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.192253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.192282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.192491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.192533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.192741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.192770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.192966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.192996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.193192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.193219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.193413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.193443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.193615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.193646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.193864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.193890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.194074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.194101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 
00:33:08.176 [2024-07-27 02:32:36.194309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.194338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.194540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.194566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.194785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.194811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.195028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.195085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.195315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.195342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.176 qpair failed and we were unable to recover it. 00:33:08.176 [2024-07-27 02:32:36.195572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.176 [2024-07-27 02:32:36.195601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.195824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.195872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.196036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.196092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.196285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.196311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.196579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.196605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 
00:33:08.177 [2024-07-27 02:32:36.196883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.196909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.197139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.197167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.197394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.197424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.197656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.197683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.197918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.197944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.198157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.198200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.198397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.198439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.198671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.198701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.198898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.198927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 00:33:08.177 [2024-07-27 02:32:36.199097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.177 [2024-07-27 02:32:36.199126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.177 qpair failed and we were unable to recover it. 
00:33:08.177 [2024-07-27 02:32:36.200528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.178 [2024-07-27 02:32:36.200567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:08.178 qpair failed and we were unable to recover it.
00:33:08.179 [2024-07-27 02:32:36.208410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.179 [2024-07-27 02:32:36.208454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.179 qpair failed and we were unable to recover it.
00:33:08.179 [2024-07-27 02:32:36.211048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.179 [2024-07-27 02:32:36.211101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:08.179 qpair failed and we were unable to recover it.
00:33:08.183 [2024-07-27 02:32:36.237000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.237026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.237236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.237281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.237464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.237513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.237744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.237789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.237967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.237998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.238204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.238248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.238456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.238499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.238721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.238764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.238917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.238944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.239176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.239221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 
00:33:08.183 [2024-07-27 02:32:36.239405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.239449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.239665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.239693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.183 [2024-07-27 02:32:36.239879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.183 [2024-07-27 02:32:36.239906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.183 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.240075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.240102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.240297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.240338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.240580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.240624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.240829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.240873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.241074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.241101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.241287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.241333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.241549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.241593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 
00:33:08.184 [2024-07-27 02:32:36.241796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.241841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.242030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.242068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.242275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.242323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.242574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.242627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.242836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.242879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.243072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.243099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.243254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.243280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.243497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.243542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.243775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.243818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.244020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.244046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 
00:33:08.184 [2024-07-27 02:32:36.244229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.244256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.244480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.244527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.244725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.244769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.244973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.244999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.245184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.245212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.184 [2024-07-27 02:32:36.245413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.184 [2024-07-27 02:32:36.245457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.184 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.245701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.245745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.245888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.245914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.246105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.246134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.246349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.246397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 
00:33:08.465 [2024-07-27 02:32:36.246598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.246642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.246795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.246821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.247020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.247046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.248126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.248157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.248369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.248419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.248646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.248691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.248837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.248863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.249065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.249092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.249270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.249314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.249514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.249559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 
00:33:08.465 [2024-07-27 02:32:36.249765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.249808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.249983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.250009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.250182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.250227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.250403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.250451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.250617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.250645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.250823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.250850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.251048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.251082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.251286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.251329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.251539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.251583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.251764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.251808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 
00:33:08.465 [2024-07-27 02:32:36.251987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.252013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.465 [2024-07-27 02:32:36.252231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.465 [2024-07-27 02:32:36.252259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.465 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.252465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.252510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.252705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.252749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.252897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.252924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.253102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.253129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.253333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.253378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.253576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.253626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.253835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.253880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.254064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.254092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 
00:33:08.466 [2024-07-27 02:32:36.254262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.254307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.254560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.254604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.254789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.254833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.255034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.255072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.255302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.255346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.255569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.255611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.255849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.255896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.256071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.256098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.256272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.256298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.256543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.256587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 
00:33:08.466 [2024-07-27 02:32:36.256790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.256834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.257066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.257093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.257270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.257296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.257531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.257574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.257782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.257829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.258043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.258077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.258281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.258307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.258511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.258555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.258756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.258800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.258977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.259003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 
00:33:08.466 [2024-07-27 02:32:36.259195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.259222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.259428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.259470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.259672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.259715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.259874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.259900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.260085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.466 [2024-07-27 02:32:36.260112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.466 qpair failed and we were unable to recover it. 00:33:08.466 [2024-07-27 02:32:36.260265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.260291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.260502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.260546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.260722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.260765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.260970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.260997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.261208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.261235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 
00:33:08.467 [2024-07-27 02:32:36.261467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.261510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.261752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.261795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.261949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.261975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.262136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.262164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.262396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.262439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.262681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.262724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.262902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.262928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.263154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.263198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.263431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.263460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.263689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.263734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 
00:33:08.467 [2024-07-27 02:32:36.263910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.263936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.264166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.264209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.264390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.264434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.264642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.264686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.264890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.264917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.265110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.265140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.265357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.265401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.265631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.265674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.265851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.265877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.266056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.266087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 
00:33:08.467 [2024-07-27 02:32:36.266265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.266291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.266520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.266564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.266765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.266811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.267019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.267056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.267217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.267247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.267450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.267494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.267697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.267742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.267944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.267970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.268180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.268207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.467 qpair failed and we were unable to recover it. 00:33:08.467 [2024-07-27 02:32:36.268411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.467 [2024-07-27 02:32:36.268441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.468 qpair failed and we were unable to recover it. 
00:33:08.468 [2024-07-27 02:32:36.268661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.468 [2024-07-27 02:32:36.268705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.468 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" record pair repeats for every retry from 02:32:36.268913 through 02:32:36.318075 ...]
00:33:08.474 [2024-07-27 02:32:36.318320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.318365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it.
00:33:08.474 [2024-07-27 02:32:36.318570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.318612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.318828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.318872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.319056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.319088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.319291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.319335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.319631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.319680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.319918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.319961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.320136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.320179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.320420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.320464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.320714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.320758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.320962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.321003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 
00:33:08.474 [2024-07-27 02:32:36.321257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.321300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.321595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.321640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.321792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.321819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.322006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.322032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.322238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.322267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.322472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.322501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.322724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.322769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.322983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.323008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.323204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.323248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.323485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.323530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 
00:33:08.474 [2024-07-27 02:32:36.323754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.323797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.323954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.323981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.324182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.324228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.324524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.324566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.324853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.324900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.325128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.325158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.325381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.474 [2024-07-27 02:32:36.325409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.474 qpair failed and we were unable to recover it. 00:33:08.474 [2024-07-27 02:32:36.325625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.325654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.325872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.325912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.326160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.326204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 
00:33:08.475 [2024-07-27 02:32:36.326378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.326422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.326625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.326667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.326846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.326871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.327035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.327068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.327250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.327294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.327496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.327538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.327716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.327761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.327917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.327943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.328143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.328187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.328394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.328449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 
00:33:08.475 [2024-07-27 02:32:36.328685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.328734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.328948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.328973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.329160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.329205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.329437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.329482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.329676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.329726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.329912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.329939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.330100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.330129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.330347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.330397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.330609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.330653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.330829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.330856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 
00:33:08.475 [2024-07-27 02:32:36.331037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.331074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.331258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.331302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.331514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.331558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.331758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.331814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.331966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.331993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.332203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.332247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.475 [2024-07-27 02:32:36.332398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.475 [2024-07-27 02:32:36.332425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.475 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.332670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.332713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.332892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.332918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.333109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.333138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 
00:33:08.476 [2024-07-27 02:32:36.333390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.333433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.333664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.333709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.333885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.333911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.334132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.334176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.334404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.334451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.334651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.334703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.334879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.334905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.335053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.335084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.335250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.335294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.335520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.335573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 
00:33:08.476 [2024-07-27 02:32:36.335774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.335817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.336000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.336026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.336243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.336288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.336489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.336539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.336734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.336776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.336985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.337012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.337260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.337304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.337512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.337555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.337776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.337820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.338022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.338048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 
00:33:08.476 [2024-07-27 02:32:36.338234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.338259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.338495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.338539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.338787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.338828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.338991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.339018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.339217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.339264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.339469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.339514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.339713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.339757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.339909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.339936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.340165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.340209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 00:33:08.476 [2024-07-27 02:32:36.340373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.476 [2024-07-27 02:32:36.340416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.476 qpair failed and we were unable to recover it. 
00:33:08.476 [2024-07-27 02:32:36.340832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.476 [2024-07-27 02:32:36.340870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.476 qpair failed and we were unable to recover it.
[... the same error triplet then repeats for the new tqpair=0x1da04b0 between 2024-07-27 02:32:36.340 and 02:32:36.362 ...]
00:33:08.479 [2024-07-27 02:32:36.362263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.362291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.362455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.362484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.362705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.362731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.362912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.362938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.363138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.363168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.363339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.363365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.363535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.363561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.363759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.363788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.363977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.364002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.364190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.364221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 
00:33:08.479 [2024-07-27 02:32:36.364457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.364486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.364686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.364713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.364932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.364960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.365164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.365194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.365393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.365419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.365587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.365615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.365820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.365849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.366041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.366073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.366244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.366272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.366474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.366503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 
00:33:08.479 [2024-07-27 02:32:36.366699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.366725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.366887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.366916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.367148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.367178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.367356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.367382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.367578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.367606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.367770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.367799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.368017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.368043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.368251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.368280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.479 qpair failed and we were unable to recover it. 00:33:08.479 [2024-07-27 02:32:36.368495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.479 [2024-07-27 02:32:36.368521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.368689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.368715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 
00:33:08.480 [2024-07-27 02:32:36.368947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.368975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.369172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.369201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.369379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.369405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.369562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.369604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.369774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.369803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.370006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.370031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.370187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.370213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.370389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.370414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.370613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.370639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.370851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.370876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 
00:33:08.480 [2024-07-27 02:32:36.371078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.371107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.371273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.371298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.371523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.371551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.371772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.371805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.371985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.372010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.372207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.372233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.372409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.372438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.372662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.372688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.372883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.372912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.373098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.373124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 
00:33:08.480 [2024-07-27 02:32:36.373296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.373322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.373542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.373571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.373767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.373796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.373987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.374012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.374169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.374195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.374397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.374426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.374647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.374673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.374901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.374930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.375128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.480 [2024-07-27 02:32:36.375157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.480 qpair failed and we were unable to recover it. 00:33:08.480 [2024-07-27 02:32:36.375336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.375361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 
00:33:08.481 [2024-07-27 02:32:36.375516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.375543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.375718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.375744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.375917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.375942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.376165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.376194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.376361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.376390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.376592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.376618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.376847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.376876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.377071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.377098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.377249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.377274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.377500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.377529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 
00:33:08.481 [2024-07-27 02:32:36.377721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.377755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.377957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.377983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.378146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.378172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.378341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.378370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.378556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.378582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.378807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.378835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.379009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.379038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.379223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.379249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.379414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.379442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.379667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.379696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 
00:33:08.481 [2024-07-27 02:32:36.379864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.379890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.380050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.380096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.380317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.380343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.380519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.380544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.380747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.380777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.380999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.381028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.381242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.381268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.381447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.381473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.381692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.381720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.381892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.381918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 
00:33:08.481 [2024-07-27 02:32:36.382090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.382116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.382271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.382297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.382493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.382519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.382748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.481 [2024-07-27 02:32:36.382777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.481 qpair failed and we were unable to recover it. 00:33:08.481 [2024-07-27 02:32:36.383015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.383041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.383237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.383263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.383440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.383465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.383674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.383706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.383904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.383930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.384145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.384171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 
00:33:08.482 [2024-07-27 02:32:36.384326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.384353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.384556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.384582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.384786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.384813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.384973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.385000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.385185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.385211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.385392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.385418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.385613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.385642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.385843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.385868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.386072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.386101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.386372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.386400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 
00:33:08.482 [2024-07-27 02:32:36.386636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.386664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.386889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.386918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.387136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.387162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.387338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.387363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.387562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.387587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.387820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.387849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.388020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.388045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.388236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.388262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.388429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.388455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.388634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.388661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 
00:33:08.482 [2024-07-27 02:32:36.388861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.388890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.389103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.389132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.389308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.389333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.389530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.389556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.389795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.389820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.389972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.389997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.390156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.390182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.390386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.390414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.390635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.390660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.482 qpair failed and we were unable to recover it. 00:33:08.482 [2024-07-27 02:32:36.390858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.482 [2024-07-27 02:32:36.390886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 
00:33:08.483 [2024-07-27 02:32:36.391103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.391131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.391325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.391350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.391518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.391547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.391705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.391735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.391956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.391982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.392191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.392219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.392453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.392479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.392659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.392684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.392871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.392897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.393074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.393100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 
00:33:08.483 [2024-07-27 02:32:36.393250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.393275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.393458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.393486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.393675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.393703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.393901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.393927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.394127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.394156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.394369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.394394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.394594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.394619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.394881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.394909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.395078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.395107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.395300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.395326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 
00:33:08.483 [2024-07-27 02:32:36.395528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.395556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.395750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.395778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.395943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.395968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.396175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.396204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.396398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.396434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.396633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.396658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.396824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.396854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.397047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.397083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.397280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.397305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.397506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.397534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 
00:33:08.483 [2024-07-27 02:32:36.397713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.397738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.397945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.397970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.398174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.398202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.398396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.398424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.483 [2024-07-27 02:32:36.398625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.483 [2024-07-27 02:32:36.398651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.483 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.398847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.398879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.399052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.399087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.399297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.399323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.399563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.399591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.399789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.399818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 
00:33:08.484 [2024-07-27 02:32:36.400018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.400043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.400231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.400260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.400505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.400530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.400725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.400750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.400952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.400980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.401161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.401195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.401396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.401421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.401608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.401636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.401855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.401883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.402081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.402108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 
00:33:08.484 [2024-07-27 02:32:36.402266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.402292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.402453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.402478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.402651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.402676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.402875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.402903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.403096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.403125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.403326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.403351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.403554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.403582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.403763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.403788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.403960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.403985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.404149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.404177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 
00:33:08.484 [2024-07-27 02:32:36.404374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.404402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.404602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.404628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.404776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.404806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.405025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.405053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.405238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.405264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.405417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.405460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.405655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.405680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.405869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.405894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.406036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.406066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 00:33:08.484 [2024-07-27 02:32:36.406208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.406251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.484 qpair failed and we were unable to recover it. 
00:33:08.484 [2024-07-27 02:32:36.406422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.484 [2024-07-27 02:32:36.406448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.406625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.406650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.406813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.406842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.407016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.407041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.407249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.407275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.407456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.407484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.407655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.407681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.407876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.407904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.408097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.408126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.408297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.408323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 
00:33:08.485 [2024-07-27 02:32:36.408510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.408535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.408707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.408732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.408904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.408929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.409073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.409099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.409292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.409320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.409491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.409516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.409673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.409698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.409918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.409947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.410147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.410173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.410327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.410352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 
00:33:08.485 [2024-07-27 02:32:36.410532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.410557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.410767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.410793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.410995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.411021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.411220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.411249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.411422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.411448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.411628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.411653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.411819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.411848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.412041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.412073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.412297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.412322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.412489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.412519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 
00:33:08.485 [2024-07-27 02:32:36.412691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.485 [2024-07-27 02:32:36.412716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.485 qpair failed and we were unable to recover it. 00:33:08.485 [2024-07-27 02:32:36.412941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.412970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.413162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.413191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.413402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.413427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.413599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.413627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.413814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.413842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.414004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.414029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.414205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.414231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.414427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.414452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.414623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.414648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 
00:33:08.486 [2024-07-27 02:32:36.414830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.414856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.415056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.415089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.415282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.415308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.415496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.415525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.415744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.415773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.415986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.416012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.416206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.416235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.416434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.416462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.416655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.416681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.416871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.416899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 
00:33:08.486 [2024-07-27 02:32:36.417113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.417142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.417344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.417370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.417513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.417539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.417747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.417776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.417963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.417988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.418174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.418203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.418368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.418396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.418593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.418618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.418808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.418836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.418995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.419023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 
00:33:08.486 [2024-07-27 02:32:36.419207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.419236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.419416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.419442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.419635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.419664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.419886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.419912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.420117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.420146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.420330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.420358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.420544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.486 [2024-07-27 02:32:36.420569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.486 qpair failed and we were unable to recover it. 00:33:08.486 [2024-07-27 02:32:36.420768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.420796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.421005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.421030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.421190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.421216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 
00:33:08.487 [2024-07-27 02:32:36.421363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.421388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.421584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.421612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.421814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.421839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.422065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.422094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.422295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.422323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.422530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.422555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.422727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.422756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.422946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.422974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.423152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.423179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.423370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.423399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 
00:33:08.487 [2024-07-27 02:32:36.423592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.423620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.423814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.423839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.424005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.424034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.424240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.424265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.424442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.424467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.424659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.424687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.424908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.424936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.425118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.425149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.425350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.425378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.425547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.425575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 
00:33:08.487 [2024-07-27 02:32:36.425802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.425827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.425978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.426003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.426192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.426218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.426390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.426415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.426645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.426673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.426833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.426861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.427034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.427065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.427275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.427304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.427494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.427522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.427684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.427710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 
00:33:08.487 [2024-07-27 02:32:36.427907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.427936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.428117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.487 [2024-07-27 02:32:36.428146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.487 qpair failed and we were unable to recover it. 00:33:08.487 [2024-07-27 02:32:36.428313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.428339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 00:33:08.488 [2024-07-27 02:32:36.428491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.428533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 00:33:08.488 [2024-07-27 02:32:36.428737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.428763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 00:33:08.488 [2024-07-27 02:32:36.428941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.428967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 00:33:08.488 [2024-07-27 02:32:36.429134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.429160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 00:33:08.488 [2024-07-27 02:32:36.429327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.429355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 00:33:08.488 [2024-07-27 02:32:36.429554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.429580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 00:33:08.488 [2024-07-27 02:32:36.429780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.488 [2024-07-27 02:32:36.429808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.488 qpair failed and we were unable to recover it. 
00:33:08.493 [2024-07-27 02:32:36.471515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.471542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.471736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.471765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.471980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.472008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.472241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.472267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.472463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.472492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.472665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.472692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.472890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.472915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.473094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.473120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.473315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.473344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 00:33:08.493 [2024-07-27 02:32:36.473512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.493 [2024-07-27 02:32:36.473537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.493 qpair failed and we were unable to recover it. 
00:33:08.493 [2024-07-27 02:32:36.473718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.473743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.473941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.473969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.474143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.474168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.474347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.474373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.474532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.474560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.474756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.474782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.475001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.475029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.475227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.475255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.475430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.475456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.475649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.475677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 
00:33:08.494 [2024-07-27 02:32:36.475839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.475867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.476066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.476093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.476287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.476315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.476508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.476536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.476725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.476751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.476979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.477007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.477196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.477224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.477434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.477459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.477639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.477664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.477859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.477887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 
00:33:08.494 [2024-07-27 02:32:36.478076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.478102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.478306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.478339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.478504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.478532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.478721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.478747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.478942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.478969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.479168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.479198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.479368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.479393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.479590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.479618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.479803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.479831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.480026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.480051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 
00:33:08.494 [2024-07-27 02:32:36.480267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.480296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.480479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.480507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.480694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.480719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.480940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.480968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.481159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.481188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.494 [2024-07-27 02:32:36.481389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.494 [2024-07-27 02:32:36.481415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.494 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.481583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.481611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.481794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.481822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.482018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.482043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.482197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.482223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 
00:33:08.495 [2024-07-27 02:32:36.482375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.482400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.482570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.482595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.482796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.482824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.482991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.483019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.483203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.483228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.483394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.483422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.483590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.483617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.483806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.483831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.484024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.484052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.484281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.484309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 
00:33:08.495 [2024-07-27 02:32:36.484527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.484552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.484746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.484774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.484926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.484954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.485150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.485177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.485369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.485397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.485578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.485603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.485754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.485779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.485976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.486001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.486188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.486217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.486416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.486441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 
00:33:08.495 [2024-07-27 02:32:36.486641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.486668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.486821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.486849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.487013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.487042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.487235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.487261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.487432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.487458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.487657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.487682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.495 qpair failed and we were unable to recover it. 00:33:08.495 [2024-07-27 02:32:36.487846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.495 [2024-07-27 02:32:36.487874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.488081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.488111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.488297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.488323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.488542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.488570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 
00:33:08.496 [2024-07-27 02:32:36.488760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.488785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.488930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.488956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.489122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.489148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.489365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.489390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.489590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.489616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.489825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.489850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.490025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.490051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.490243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.490268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.490424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.490449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.490647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.490675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 
00:33:08.496 [2024-07-27 02:32:36.490895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.490920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.491127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.491153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.491349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.491378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.491572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.491596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.491782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.491810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.492003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.492033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.492217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.492243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.492459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.492487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.492658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.492686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.492887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.492918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 
00:33:08.496 [2024-07-27 02:32:36.493157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.493186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.493356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.493386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.493578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.493604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.493781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.493809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.493999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.494027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.494192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.494218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.494416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.494444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.494598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.494626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.494812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.494838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.495006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.495034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 
00:33:08.496 [2024-07-27 02:32:36.495198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.495226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.496 qpair failed and we were unable to recover it. 00:33:08.496 [2024-07-27 02:32:36.495427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.496 [2024-07-27 02:32:36.495452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.495624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.495652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.495824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.495852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.496020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.496045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.496228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.496257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.496430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.496458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.496628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.496653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.496849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.496874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.497084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.497113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 
00:33:08.497 [2024-07-27 02:32:36.497287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.497312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.497505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.497535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.497729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.497757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.497979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.498007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.498203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.498229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.498418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.498446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.498638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.498667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.498872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.498901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.499089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.499117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.499315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.499340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 
00:33:08.497 [2024-07-27 02:32:36.499505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.499534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.499725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.499753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.499913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.499939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.500114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.500140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.500280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.500305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.500447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.500472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.500644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.500669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.500864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.500892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.501111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.501137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 00:33:08.497 [2024-07-27 02:32:36.501287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.497 [2024-07-27 02:32:36.501313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.497 qpair failed and we were unable to recover it. 
00:33:08.497 [2024-07-27 02:32:36.501540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.497 [2024-07-27 02:32:36.501569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.497 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back from 02:32:36.501540 through 02:32:36.546849: every connect() attempt to 10.0.0.2:4420 returns errno = 111 (ECONNREFUSED), and each qpair on tqpair=0x1da04b0 fails without recovery ...]
00:33:08.503 [2024-07-27 02:32:36.546821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.503 [2024-07-27 02:32:36.546849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.503 qpair failed and we were unable to recover it.
00:33:08.503 [2024-07-27 02:32:36.547022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.503 [2024-07-27 02:32:36.547050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.503 qpair failed and we were unable to recover it. 00:33:08.503 [2024-07-27 02:32:36.547222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.503 [2024-07-27 02:32:36.547248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.503 qpair failed and we were unable to recover it. 00:33:08.503 [2024-07-27 02:32:36.547428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.503 [2024-07-27 02:32:36.547454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.503 qpair failed and we were unable to recover it. 00:33:08.503 [2024-07-27 02:32:36.547655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.503 [2024-07-27 02:32:36.547683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.503 qpair failed and we were unable to recover it. 00:33:08.503 [2024-07-27 02:32:36.547873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.503 [2024-07-27 02:32:36.547898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.503 qpair failed and we were unable to recover it. 00:33:08.503 [2024-07-27 02:32:36.548119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.503 [2024-07-27 02:32:36.548149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.548342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.548370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.548543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.548569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.548716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.548741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.548964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.548992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 
00:33:08.504 [2024-07-27 02:32:36.549155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.549181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.549377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.549405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.549594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.549622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.549853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.549879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.550050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.550085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.550280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.550308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.550526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.550552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.550772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.550800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.550999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.551024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.551233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.551259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 
00:33:08.504 [2024-07-27 02:32:36.551459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.551487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.551680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.551708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.551908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.551934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.552087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.552112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.552309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.552337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.552511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.552536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.552731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.552759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.552978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.553003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.553209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.553235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.553441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.553469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 
00:33:08.504 [2024-07-27 02:32:36.553670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.553696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.553834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.553860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.554043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.554076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.554268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.554296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.554486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.554511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.554678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.554706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.554893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.554921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.555101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.555128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.504 qpair failed and we were unable to recover it. 00:33:08.504 [2024-07-27 02:32:36.555350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.504 [2024-07-27 02:32:36.555378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.555567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.555592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 
00:33:08.505 [2024-07-27 02:32:36.555773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.555798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.555994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.556022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.556233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.556262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.556430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.556455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.556617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.556646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.556811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.556840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.557074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.557100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.557256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.557282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.557463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.557489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.557657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.557682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 
00:33:08.505 [2024-07-27 02:32:36.557874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.557902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.558096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.558126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.558326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.558351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.558546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.558574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.558770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.558798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.558988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.559013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.559201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.559227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.559426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.559454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.559629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.559656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.559809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.559851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 
00:33:08.505 [2024-07-27 02:32:36.560067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.560095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.560263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.560292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.560486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.560514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.560732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.560760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.560985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.561010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.561214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.561242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.561411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.561439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.561631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.561656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.561857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.561882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.562040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.562075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 
00:33:08.505 [2024-07-27 02:32:36.562265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.562291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.562520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.505 [2024-07-27 02:32:36.562549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.505 qpair failed and we were unable to recover it. 00:33:08.505 [2024-07-27 02:32:36.562759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.562784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.562959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.562984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.563184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.563212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.563383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.563411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.563602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.563627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.563813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.563840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.564030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.564064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.564268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.564293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 
00:33:08.506 [2024-07-27 02:32:36.564490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.564518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.564733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.564761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.564949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.564974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.565181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.565209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.565398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.565426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.565616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.565641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.565844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.565870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.566028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.566053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.566237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.566267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.566494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.566519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 
00:33:08.506 [2024-07-27 02:32:36.566689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.566714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.566885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.566910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.567054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.567086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.567266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.567291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.567465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.567490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.567687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.567715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.567881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.567908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.568072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.568098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.568266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.568294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.568491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.568519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 
00:33:08.506 [2024-07-27 02:32:36.568714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.568739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.568950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.568975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.569168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.569196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.569395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.569420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.569644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.569671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.569881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.569906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.570080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.570105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.506 [2024-07-27 02:32:36.570276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.506 [2024-07-27 02:32:36.570303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.506 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.570499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.570527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.570692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.570717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 
00:33:08.507 [2024-07-27 02:32:36.570875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.570903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.571101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.571129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.571325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.571350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.571497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.571522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.571693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.571718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.571893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.571922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.572118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.572147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.572332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.572359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.572509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.572534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.572732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.572760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 
00:33:08.507 [2024-07-27 02:32:36.572930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.572958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.573127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.573153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.573371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.573399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.573559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.573587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.573780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.573805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.573994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.574022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.574247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.574275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.574506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.574531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.574759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.574788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 00:33:08.507 [2024-07-27 02:32:36.574992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.507 [2024-07-27 02:32:36.575022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.507 qpair failed and we were unable to recover it. 
00:33:08.507 [2024-07-27 02:32:36.575219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.507 [2024-07-27 02:32:36.575245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:08.507 qpair failed and we were unable to recover it.
[the same three-record failure sequence repeats roughly 200 more times for tqpair=0x1da04b0 (addr=10.0.0.2, port=4420), timestamps 2024-07-27 02:32:36.575466 through 02:32:36.620244, wall-clock prefixes 00:33:08.507 through 00:33:08.792]
00:33:08.792 [2024-07-27 02:32:36.620439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.620464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.620651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.620680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.620841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.620869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.621088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.621114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.621306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.621334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.621532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.621560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.621739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.621765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.621919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.621945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.622135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.622164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.622327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.622352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 
00:33:08.792 [2024-07-27 02:32:36.622505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.622530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.622681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.622706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.622874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.622899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.623088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.623117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.623285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.623313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.623510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.623537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.623765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.623793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.624001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.624027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.624242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.624268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.792 [2024-07-27 02:32:36.624419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.624445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 
00:33:08.792 [2024-07-27 02:32:36.624653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.792 [2024-07-27 02:32:36.624678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.792 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.624891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.624917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.625102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.625131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.625358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.625386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.625577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.625602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.625779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.625804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.625948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.625973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.626174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.626200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.626404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.626432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.626610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.626635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 
00:33:08.793 [2024-07-27 02:32:36.626812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.626837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.627076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.627105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.627300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.627328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.627521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.627551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.627742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.627770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.627992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.628022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.628203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.628229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.628433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.628461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.628653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.628681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.628858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.628885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 
00:33:08.793 [2024-07-27 02:32:36.629114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.629143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.629345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.629370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.629548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.629573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.629725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.629750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.629896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.629937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.630130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.630156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.630315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.630343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.630513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.630542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.630733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.630758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.630972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.631000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 
00:33:08.793 [2024-07-27 02:32:36.631161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.631191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.631363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.631389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.631582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.631610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.631788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.631814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.632012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.632037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.632192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.793 [2024-07-27 02:32:36.632217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.793 qpair failed and we were unable to recover it. 00:33:08.793 [2024-07-27 02:32:36.632413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.632441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.632636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.632661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.632835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.632862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.633070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.633100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 
00:33:08.794 [2024-07-27 02:32:36.633277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.633308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.633496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.633537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.633732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.633759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.633938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.633964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.634116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.634143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.634343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.634370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.634593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.634631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.634799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.634839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.635008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.635037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.635257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.635283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 
00:33:08.794 [2024-07-27 02:32:36.635455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.635483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.635651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.635681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.636010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.636056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.636304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.636331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.636570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.636600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.636801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.636828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.637056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.637093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.637257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.637285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.637454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.637482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.637648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.637683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 
00:33:08.794 [2024-07-27 02:32:36.637840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.637867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.638019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.638045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.638199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.638227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.638450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.638481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.638685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.638712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.638876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.638902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.639107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.639138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.639317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.639347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.639547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.639577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.639738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.639768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 
00:33:08.794 [2024-07-27 02:32:36.639940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.639966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.794 [2024-07-27 02:32:36.640203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.794 [2024-07-27 02:32:36.640234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.794 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.640434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.640464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.640670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.640697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.640917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.640946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.641175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.641206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.641374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.641400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.641578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.641606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.641848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.641874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.642085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.642112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 
00:33:08.795 [2024-07-27 02:32:36.642312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.642342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.642513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.642553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.642765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.642792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.643017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.643046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.643230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.643259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.643460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.643488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.643707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.643738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.643931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.643961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.644165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.644192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.644401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.644445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 
00:33:08.795 [2024-07-27 02:32:36.644652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.644681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.644882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.644909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.645132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.645162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.645361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.645390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.645564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.645590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.645742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.645769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.645960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.645989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.646237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.646264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.646468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.646498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.646694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.646723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 
00:33:08.795 [2024-07-27 02:32:36.646923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.646949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.795 qpair failed and we were unable to recover it. 00:33:08.795 [2024-07-27 02:32:36.647128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.795 [2024-07-27 02:32:36.647158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.647364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.647390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.647591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.647618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.647833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.647862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.648093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.648120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.648275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.648301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.648452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.648479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.648667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.648708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 00:33:08.796 [2024-07-27 02:32:36.648904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.796 [2024-07-27 02:32:36.648933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.796 qpair failed and we were unable to recover it. 
00:33:08.796 [2024-07-27 02:32:36.649132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.796 [2024-07-27 02:32:36.649163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:08.796 qpair failed and we were unable to recover it.
00:33:08.796 [entries from 02:32:36.649397 through 02:32:36.698762 condensed: the same three-line failure repeats back-to-back for every reconnect attempt in this ~50 ms window -- posix.c:1023:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error to addr=10.0.0.2, port=4420 for tqpair=0x7ffbd4000b90, tqpair=0x1da04b0, and then tqpair=0x7ffbe4000b90; and each attempt ends with "qpair failed and we were unable to recover it."]
00:33:08.802 [2024-07-27 02:32:36.698965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.698996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.699206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.699233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.699408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.699435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.699641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.699677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.699860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.699889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.700092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.700119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.700325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.700355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.700556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.700584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.700762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.700791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.700992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.701022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 
00:33:08.802 [2024-07-27 02:32:36.701198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.701225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.701373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.701408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.701647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.701689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.701920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.701949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.702117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.702144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.702327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.702360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.702590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.702620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.702785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.702812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.702976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.703006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.703211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.703238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 
00:33:08.802 [2024-07-27 02:32:36.703426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.802 [2024-07-27 02:32:36.703454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.802 qpair failed and we were unable to recover it. 00:33:08.802 [2024-07-27 02:32:36.703711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.703755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.703966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.703999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.704208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.704237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.704463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.704493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.704716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.704746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.704944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.704973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.705206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.705239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.705469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.705505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.705680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.705708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 
00:33:08.803 [2024-07-27 02:32:36.705882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.705909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.706135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.706170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.706369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.706396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.706592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.706623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.706820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.706851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.707083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.707123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.707301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.707331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.707523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.707557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.707754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.707781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.707958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.707985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 
00:33:08.803 [2024-07-27 02:32:36.708226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.708256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.708456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.708486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.708851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.708914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.709088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.709118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.709280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.709308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.709543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.709573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.709775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.709806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.709994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.710021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.710229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.710260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.710454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.710486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 
00:33:08.803 [2024-07-27 02:32:36.710692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.710720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.710889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.710919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.711088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.711119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.711287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.711318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.711514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.711545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.711744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.803 [2024-07-27 02:32:36.711776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.803 qpair failed and we were unable to recover it. 00:33:08.803 [2024-07-27 02:32:36.712008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.712035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.712218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.712249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.712458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.712492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.712736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.712763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 
00:33:08.804 [2024-07-27 02:32:36.712962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.712991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.713186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.713218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.713399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.713426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.713633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.713681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.713909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.713938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.714142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.714172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.714325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.714352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.714532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.714559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.714737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.714772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.714952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.714979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 
00:33:08.804 [2024-07-27 02:32:36.715176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.715210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.715388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.715415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.715687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.715738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.715956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.715986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.716149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.716188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.716419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.716451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.716616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.716649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.716824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.716851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.717042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.717077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.717250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.717281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 
00:33:08.804 [2024-07-27 02:32:36.717491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.717520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.717879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.717936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.718151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.718180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.718380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.718410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.718678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.718709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.718903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.718933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.719147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.719175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.719379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.719412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.719639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.719682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.719896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.719937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 
00:33:08.804 [2024-07-27 02:32:36.720131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.804 [2024-07-27 02:32:36.720177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.804 qpair failed and we were unable to recover it. 00:33:08.804 [2024-07-27 02:32:36.720430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.720463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.720701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.720727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.720969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.721000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.721215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.721246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.721440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.721467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.721670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.721697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.721901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.721931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.722130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.722158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.722356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.722386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 
00:33:08.805 [2024-07-27 02:32:36.722611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.722652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.722824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.722851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.723077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.723119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.723357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.723384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.723551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.723579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.723785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.723816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.724012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.724042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.724245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.724272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.724498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.724535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.724740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.724770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 
00:33:08.805 [2024-07-27 02:32:36.724959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.724987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.725230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.725257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.725412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.725457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.725668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.725711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.725885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.725918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.726127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.726158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.726360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.726389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.726604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.726635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.726853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.726884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.727108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.727136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 
00:33:08.805 [2024-07-27 02:32:36.727345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.727376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.805 [2024-07-27 02:32:36.727580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.805 [2024-07-27 02:32:36.727613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.805 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.727835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.727862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.728090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.728121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.728344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.728374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.728562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.728591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.728827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.728858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.729073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.729101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.729309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.729336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 00:33:08.806 [2024-07-27 02:32:36.729498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.806 [2024-07-27 02:32:36.729525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.806 qpair failed and we were unable to recover it. 
00:33:08.806 [2024-07-27 02:32:36.729718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:08.806 [2024-07-27 02:32:36.729769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 
00:33:08.806 qpair failed and we were unable to recover it. 
[... the same failure triple (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 2024-07-27 02:32:36.729718 through 02:32:36.779617, log timestamps 00:33:08.806 through 00:33:08.812 ...]
00:33:08.812 [2024-07-27 02:32:36.779836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.779866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.780091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.780118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.780322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.780353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.780553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.780580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.780789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.780817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.781014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.781045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.781271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.781301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.781517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.781544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.781743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.781773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.781973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.782000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 
00:33:08.812 [2024-07-27 02:32:36.782238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.782265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.782489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.782516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.782781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.782811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.783090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.783118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.783376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.783403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.783595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.783623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.783830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.783857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.784047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.784081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.784237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.784264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.784443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.784471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 
00:33:08.812 [2024-07-27 02:32:36.784674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.784704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.784894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.784924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.812 [2024-07-27 02:32:36.785192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.812 [2024-07-27 02:32:36.785219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.812 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.785432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.785459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.785624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.785666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.785909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.785937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.786135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.786165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.786381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.786409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.786580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.786607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.786807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.786837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 
00:33:08.813 [2024-07-27 02:32:36.787030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.787078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.787276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.787304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.787606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.787636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.787848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.787874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.788044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.788080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.788289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.788323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.788490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.788521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.788749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.788776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.788975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.789005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.789201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.789233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 
00:33:08.813 [2024-07-27 02:32:36.789457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.789484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.789717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.789747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.789941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.789972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.790201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.790228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.790437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.790468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.790636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.790666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.790881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.790909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.791163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.791195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.791415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.791446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.791669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.791696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 
00:33:08.813 [2024-07-27 02:32:36.791938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.791968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.792172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.792198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.792407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.792433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.792632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.792663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.792829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.792859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.793068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.793098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.793295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.793322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.813 [2024-07-27 02:32:36.793529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.813 [2024-07-27 02:32:36.793560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.813 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.793779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.793806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.793978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.794005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 
00:33:08.814 [2024-07-27 02:32:36.794228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.794259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.794437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.794465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.794671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.794702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.794888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.794919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.795114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.795142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.795328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.795355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.795534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.795562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.795810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.795838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.796043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.796082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.796272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.796303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 
00:33:08.814 [2024-07-27 02:32:36.796584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.796612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.796835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.796866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.797069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.797099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.797303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.797346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.797572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.797603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.797800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.797835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.798079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.798106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.798393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.798422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.798614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.798645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.798874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.798902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 
00:33:08.814 [2024-07-27 02:32:36.799104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.799135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.799331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.799363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.799577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.799604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.799845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.799875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.800095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.800126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.800344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.800371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.800627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.800654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.800884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.800914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.801139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.801167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.814 qpair failed and we were unable to recover it. 00:33:08.814 [2024-07-27 02:32:36.801376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.814 [2024-07-27 02:32:36.801407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 
00:33:08.815 [2024-07-27 02:32:36.801601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.801631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.801860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.801888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.802122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.802152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.802347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.802378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.802569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.802596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.802800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.802828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.803045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.803085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.803313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.803340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.803570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.803600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.803830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.803871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 
00:33:08.815 [2024-07-27 02:32:36.804182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.804209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.804456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.804487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.804709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.804736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.804913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.804941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.805147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.805178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.805343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.805375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.805590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.805617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.805817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.805849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.806076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.806103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.806269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.806296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 
00:33:08.815 [2024-07-27 02:32:36.806488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.806519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.806752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.806780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.807030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.807057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.807280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.807310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.807503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.807534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.807745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.807778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.808010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.808041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.808286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.808317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.808510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.808537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.808769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.808800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 
00:33:08.815 [2024-07-27 02:32:36.809029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.809069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.809262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.809288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.809517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.815 [2024-07-27 02:32:36.809547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.815 qpair failed and we were unable to recover it. 00:33:08.815 [2024-07-27 02:32:36.809703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.809734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.809926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.809953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.810168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.810197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.810340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.810367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.810554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.810581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.810798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.810829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.811035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.811072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 
00:33:08.816 [2024-07-27 02:32:36.811276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.811318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.811494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.811520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.811751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.811780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.811987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.812031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.812252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.812284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.812486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.812513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.812714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.812741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.812965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.812995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.813206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.813237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.813423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.813450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 
00:33:08.816 [2024-07-27 02:32:36.813677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.813706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.813897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.813927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.814124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.814152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.814330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.814360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.814551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.814582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.814779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.814806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.815031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.815072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.815294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.815325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.815530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.815558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.815753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.815784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 
00:33:08.816 [2024-07-27 02:32:36.815977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.816008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.816202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.816229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.816435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.816467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.816687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.816717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.816915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.816943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.817144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.817180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.817373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.817404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.817602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.816 [2024-07-27 02:32:36.817630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.816 qpair failed and we were unable to recover it. 00:33:08.816 [2024-07-27 02:32:36.817835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.817862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.818073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.818104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 
00:33:08.817 [2024-07-27 02:32:36.818285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.818312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.818513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.818544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.818773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.818801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.818999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.819026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.819272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.819303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.819494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.819525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.819728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.819756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.819953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.819980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.820192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.820224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.820455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.820483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 
00:33:08.817 [2024-07-27 02:32:36.820689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.820719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.820907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.820937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.821133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.821161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.821365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.821395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.821625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.821653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.821869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.821896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.822084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.822115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.822292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.822322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.822519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.822547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.822816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.822842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 
00:33:08.817 [2024-07-27 02:32:36.823035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.823078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.823251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.823278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.823508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.823539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.823737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.823767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.823970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.823997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.824216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.824261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.824519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.824545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.824755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.824783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.824955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.824983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.825213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.825245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 
00:33:08.817 [2024-07-27 02:32:36.825439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.825466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.825662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.825692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.825894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.817 [2024-07-27 02:32:36.825921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.817 qpair failed and we were unable to recover it. 00:33:08.817 [2024-07-27 02:32:36.826132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.826176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.826418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.826448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.826615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.826651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.826856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.826884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.827077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.827108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.827284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.827325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.827531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.827573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 
00:33:08.818 [2024-07-27 02:32:36.827740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.827770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.827963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.827993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.828211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.828238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.828420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.828448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.828634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.828664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.828849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.828876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.829076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.829107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.829337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.829367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.829587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.829613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.829826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.829857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 
00:33:08.818 [2024-07-27 02:32:36.830056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.830095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.830293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.830320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.830717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.830769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.830965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.830994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.831219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.831247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.831468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.831499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.831684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.831714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.831902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.831929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.832119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.832147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.832419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.832449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 
00:33:08.818 [2024-07-27 02:32:36.832645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.832672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.832907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.832937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.833129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.833160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.833383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.833410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.833614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.833644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.833861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.833892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.834122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.834149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.834350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.834380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.818 qpair failed and we were unable to recover it. 00:33:08.818 [2024-07-27 02:32:36.834578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.818 [2024-07-27 02:32:36.834608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.834800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.834828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 
00:33:08.819 [2024-07-27 02:32:36.835032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.835069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.835271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.835302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.835480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.835507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.835676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.835703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.835945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.835976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.836163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.836195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.836429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.836460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.836659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.836689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.836889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.836916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.837152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.837184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 
00:33:08.819 [2024-07-27 02:32:36.837361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.837390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.837593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.837621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.837792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.837822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.838018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.838045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.838265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.838293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.838528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.838558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.838783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.838812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.839012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.839039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.839288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.839318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.839488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.839526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 
00:33:08.819 [2024-07-27 02:32:36.839753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.839781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.840014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.840044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.840291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.840318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.840491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.840519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.840716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.840747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.840944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.840972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.841178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.841206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.841434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.841465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.819 [2024-07-27 02:32:36.841687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.819 [2024-07-27 02:32:36.841718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.819 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.841891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.841917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 
00:33:08.820 [2024-07-27 02:32:36.842070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.842095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.842316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.842343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.842578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.842603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.842788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.842812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.842962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.843002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.843194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.843219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.843397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.843421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.843599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.843623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.843819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.843843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.844045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.844081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 
00:33:08.820 [2024-07-27 02:32:36.844283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.844312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.844515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.844540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.844771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.844799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.845005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.845033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.845252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.845278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.845513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.845546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.845735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.845764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.845933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.845959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.846153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.846180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.846397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.846426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 
00:33:08.820 [2024-07-27 02:32:36.846595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.846621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.846780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.846807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.847034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.847080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.847309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.847335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.847481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.847506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.847685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.847712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.847862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.847888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.848096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.848125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.848296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.848326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.848505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.848532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 
00:33:08.820 [2024-07-27 02:32:36.848691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.848717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.848914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.848940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.849140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.820 [2024-07-27 02:32:36.849166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.820 qpair failed and we were unable to recover it. 00:33:08.820 [2024-07-27 02:32:36.849342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.849368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.849572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.849600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.849784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.849812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.849970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.849998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.850202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.850246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.850443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.850471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.850697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.850728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 
00:33:08.821 [2024-07-27 02:32:36.850949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.850979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.851204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.851233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.851436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.851472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.851680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.851710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.851905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.851933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.852138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.852169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.852364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.852394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.852571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.852599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.852825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.852856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 00:33:08.821 [2024-07-27 02:32:36.853069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.821 [2024-07-27 02:32:36.853096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.821 qpair failed and we were unable to recover it. 
00:33:08.827 [2024-07-27 02:32:36.898928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.898959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.899180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.899208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.899371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.899403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.899629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.899660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.899859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.899897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.900126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.900155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.900414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.900442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.900668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.900698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.900893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.900928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.901101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.901129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 
00:33:08.827 [2024-07-27 02:32:36.901328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.901358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.901548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.901579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.901779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.901812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.901982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.902010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.902207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.902238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.902469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.902497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.902765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.902799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.902978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.903005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.903242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.903285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.903478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.903506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 
00:33:08.827 [2024-07-27 02:32:36.903684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.903713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.904688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.904723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.904923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.904953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.905132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.905162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.905330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.905361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.905626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.905655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.905873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.905904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.906111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.906139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.906343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.906386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.906628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.906655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 
00:33:08.827 [2024-07-27 02:32:36.906844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.906875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.907080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.827 [2024-07-27 02:32:36.907128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.827 qpair failed and we were unable to recover it. 00:33:08.827 [2024-07-27 02:32:36.907891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.907925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.908132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.908162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.908378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.908415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.908611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.908642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.908867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.908901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.909087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.909117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.909319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.909354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.909550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.909581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 
00:33:08.828 [2024-07-27 02:32:36.909810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.909839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.910021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.910050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.910248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.910280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.910481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.910512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.910744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.910774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.910971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.910999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.911208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.911239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.911439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.911469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.911693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.911726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.911959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.911987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 
00:33:08.828 [2024-07-27 02:32:36.912195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.912226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.912419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.912450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.912650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.912682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.912869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.912898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.913146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.913205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.913412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.913443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.913684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.913716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.913917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.913946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.914179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.914209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.914423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.914453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 
00:33:08.828 [2024-07-27 02:32:36.914662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.914690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.914872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.914911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.915170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.915201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.915411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.915440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.915662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.915693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.915877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.915905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.916087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.828 [2024-07-27 02:32:36.916122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.828 qpair failed and we were unable to recover it. 00:33:08.828 [2024-07-27 02:32:36.916363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.916393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.916588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.916620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.916797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.916835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 
00:33:08.829 [2024-07-27 02:32:36.917071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.917102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.917289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.917318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.917551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.917578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.917785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.917813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.918033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.918067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.918254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.918281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.918511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.918542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.918735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.918763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.918942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.918979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.919167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.919195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 
00:33:08.829 [2024-07-27 02:32:36.919380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.919407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.919559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.919588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.919827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.919882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.920055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.920115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.920323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.920365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.920562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.920590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.920776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.920813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.921017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.921049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.921280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.921310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.921480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.921508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 
00:33:08.829 [2024-07-27 02:32:36.921749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.921809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.921988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.922019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.922250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.922280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.922478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.922506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.922684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.922712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.922942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.922981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.923038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dae470 (9): Bad file descriptor 00:33:08.829 [2024-07-27 02:32:36.923279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.829 [2024-07-27 02:32:36.923320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.829 qpair failed and we were unable to recover it. 00:33:08.829 [2024-07-27 02:32:36.923511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.923540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.923740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.923786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 
00:33:08.830 [2024-07-27 02:32:36.923955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.923999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.924182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.924209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.924363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.924403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.924618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.924662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.924857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.924904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.925146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.925175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.925324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.925352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.925520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.925564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.925769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.925815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.925998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.926026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 
00:33:08.830 [2024-07-27 02:32:36.926222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.926272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.926485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.926530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.926759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.926805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.926957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.926985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.927187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.927232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.927437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.927482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.927698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.927743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.927898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.927926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.928126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.928157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.928354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.928399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 
00:33:08.830 [2024-07-27 02:32:36.928585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.928613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.928797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.928826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.928998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.929026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.929248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.929293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.929495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.929525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.929721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.929768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.929950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.929978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.930174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.930219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.930399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.930444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.930676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.930722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 
00:33:08.830 [2024-07-27 02:32:36.930884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.930913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:08.830 [2024-07-27 02:32:36.931140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.830 [2024-07-27 02:32:36.931185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:08.830 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.931363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.931409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.931641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.931687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.931846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.931874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.932053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.932090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.932268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.932316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.932525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.932571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.932793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.932839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 00:33:09.110 [2024-07-27 02:32:36.933019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.110 [2024-07-27 02:32:36.933047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.110 qpair failed and we were unable to recover it. 
00:33:09.110 [2024-07-27 02:32:36.933256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.110 [2024-07-27 02:32:36.933301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:09.110 qpair failed and we were unable to recover it.
00:33:09.112 [2024-07-27 02:32:36.948806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.112 [2024-07-27 02:32:36.948847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:09.112 qpair failed and we were unable to recover it.
00:33:09.113 [2024-07-27 02:32:36.959575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.113 [2024-07-27 02:32:36.959633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:09.113 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet repeats for every reconnect attempt logged between 02:32:36.933 and 02:32:36.980 (build time 00:33:09.110-00:33:09.116), cycling through tqpair contexts 0x7ffbdc000b90, 0x1da04b0, and 0x7ffbd4000b90, always against addr=10.0.0.2, port=4420 ...]
00:33:09.116 [2024-07-27 02:32:36.980778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.980833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.981001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.981028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.981190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.981220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.981418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.981448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.981649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.981676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.981850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.981894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.982096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.982123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.982305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.982333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.982531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.982561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.982789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.982816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 
00:33:09.116 [2024-07-27 02:32:36.982997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.983025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.983186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.983214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.983414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.983443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.983643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.983670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.983853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.983883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.984076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.984112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.984274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.984301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.984481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.116 [2024-07-27 02:32:36.984508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.116 qpair failed and we were unable to recover it. 00:33:09.116 [2024-07-27 02:32:36.984656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.984683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.984854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.984881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 
00:33:09.117 [2024-07-27 02:32:36.985046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.985083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.985282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.985309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.985519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.985546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.985784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.985813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.986042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.986075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.986224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.986251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.986426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.986453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.986610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.986636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.986789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.986816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.987042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.987080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 
00:33:09.117 [2024-07-27 02:32:36.987288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.987315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.987471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.987498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.987663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.987694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.987891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.987921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.988145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.988172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.988392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.988419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.988594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.988622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.988792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.988818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.988996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.989023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.989176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.989203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 
00:33:09.117 [2024-07-27 02:32:36.989379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.989406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.989585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.989614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.989805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.989834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.990033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.990071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.990272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.990302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.990468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.990498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.990691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.990718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.990896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.990923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.991145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.991174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.991378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.991405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 
00:33:09.117 [2024-07-27 02:32:36.991575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.991602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.991825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.991854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.992051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.992086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.117 [2024-07-27 02:32:36.992241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.117 [2024-07-27 02:32:36.992268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.117 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.992527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.992559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.992796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.992823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.993025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.993055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.993215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.993242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.993416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.993444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.993643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.993673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 
00:33:09.118 [2024-07-27 02:32:36.993864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.993893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.994120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.994147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.994298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.994325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.994503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.994530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.994672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.994699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.994922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.994952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.995124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.995155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.995351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.995377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.995579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.995609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.995815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.995844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 
00:33:09.118 [2024-07-27 02:32:36.996021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.996048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.996224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.996254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.996471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.996517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.996722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.996749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.996902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.996929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.997131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.997161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.997332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.997359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.997515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.997542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.997738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.997767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.997933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.997961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 
00:33:09.118 [2024-07-27 02:32:36.998164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.998195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.998400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.998427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.998629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.998656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.998889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.998923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.999136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.999168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.999344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.999371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.999564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.999593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:36.999815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:36.999845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:37.000017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.118 [2024-07-27 02:32:37.000043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.118 qpair failed and we were unable to recover it. 00:33:09.118 [2024-07-27 02:32:37.000222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.000251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 
00:33:09.119 [2024-07-27 02:32:37.000444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.000473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.000673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.000700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.000856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.000883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.001064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.001092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.001264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.001291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.001489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.001518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.001713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.001740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.001918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.001945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.002120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.002148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.002327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.002355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 
00:33:09.119 [2024-07-27 02:32:37.002568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.002594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.002757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.002786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.002974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.003003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.003185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.003213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.003366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.003410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.003569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.003597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.003792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.003819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.003972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.003999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.004178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.004205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.004387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.004413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 
00:33:09.119 [2024-07-27 02:32:37.004583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.004614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.004779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.004807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.004990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.005017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.005201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.005228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.005394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.005421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.005581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.005608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.005799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.005827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.006002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.006030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.006236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.006263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.006436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.006464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 
00:33:09.119 [2024-07-27 02:32:37.006654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.006682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.006875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.006904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.007120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.119 [2024-07-27 02:32:37.007148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.119 qpair failed and we were unable to recover it. 00:33:09.119 [2024-07-27 02:32:37.007296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.120 [2024-07-27 02:32:37.007323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.120 qpair failed and we were unable to recover it. 00:33:09.120 [2024-07-27 02:32:37.007531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.120 [2024-07-27 02:32:37.007558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.120 qpair failed and we were unable to recover it. 00:33:09.120 [2024-07-27 02:32:37.007738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.120 [2024-07-27 02:32:37.007765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.120 qpair failed and we were unable to recover it. 00:33:09.120 [2024-07-27 02:32:37.007924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.120 [2024-07-27 02:32:37.007952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.120 qpair failed and we were unable to recover it. 00:33:09.120 [2024-07-27 02:32:37.008133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.120 [2024-07-27 02:32:37.008161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.120 qpair failed and we were unable to recover it. 00:33:09.120 [2024-07-27 02:32:37.008312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.120 [2024-07-27 02:32:37.008340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.120 qpair failed and we were unable to recover it. 00:33:09.120 [2024-07-27 02:32:37.008516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.120 [2024-07-27 02:32:37.008543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.120 qpair failed and we were unable to recover it. 
00:33:09.120 [2024-07-27 02:32:37.008695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.120 [2024-07-27 02:32:37.008722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:09.120 qpair failed and we were unable to recover it.
[... the three log records above repeat back-to-back roughly 200 more times, differing only in timestamps (02:32:37.008695 through 02:32:37.053269; elapsed prefix 00:33:09.120 through 00:33:09.126); every attempt fails identically against tqpair=0x1da04b0, addr=10.0.0.2, port=4420 ...]
00:33:09.126 [2024-07-27 02:32:37.053490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.053517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.053718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.053746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.053980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.054010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.054256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.054283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.054441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.054468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.054613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.054655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.054873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.054900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.055083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.055116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.055281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.055309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.055471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.055498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 
00:33:09.126 [2024-07-27 02:32:37.055679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.055707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.055872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.055903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.056128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.056155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.056316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.056345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.056570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.056597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.056780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.056807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.056985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.057012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.126 qpair failed and we were unable to recover it. 00:33:09.126 [2024-07-27 02:32:37.057207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.126 [2024-07-27 02:32:37.057234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.057444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.057471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.057691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.057722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 
00:33:09.127 [2024-07-27 02:32:37.057938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.057968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.058142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.058169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.058382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.058412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.058636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.058664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.058857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.058884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.059071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.059116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.059305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.059340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.059558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.059586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.059828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.059871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.060106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.060138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 
00:33:09.127 [2024-07-27 02:32:37.060336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.060364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.060587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.060618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.060810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.060840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.061032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.061070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.061272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.061298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.061460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.061489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.061694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.061721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.061885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.061914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.062092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.062129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.062296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.062333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 
00:33:09.127 [2024-07-27 02:32:37.062518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.062545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.062725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.062758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.062977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.063004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.063191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.063219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.063415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.063443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.063641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.063668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.063972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.064034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.064253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.064295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.064482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.064511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 00:33:09.127 [2024-07-27 02:32:37.064717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.127 [2024-07-27 02:32:37.064749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.127 qpair failed and we were unable to recover it. 
00:33:09.127 [2024-07-27 02:32:37.064972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.065021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.065235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.065263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.065441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.065468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.065689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.065717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.065891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.065919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.066107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.066135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.066374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.066402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.066606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.066633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.066848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.066876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.067102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.067161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 
00:33:09.128 [2024-07-27 02:32:37.067339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.067367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.067604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.067649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.068019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.068082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.068268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.068296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.068504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.068532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.068872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.068926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.069124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.069150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.069330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.069375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.069619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.069661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.069858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.069887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 
00:33:09.128 [2024-07-27 02:32:37.070098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.070127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.070317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.070346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.070546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.070573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.070785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.070833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.071024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.071055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.071294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.071320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.071522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.071552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.071790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.071819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.072043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.072076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.072268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.072296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 
00:33:09.128 [2024-07-27 02:32:37.072499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.072526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.072716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.072749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.072975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.073005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.073211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.073239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.073422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.073449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.128 qpair failed and we were unable to recover it. 00:33:09.128 [2024-07-27 02:32:37.073667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.128 [2024-07-27 02:32:37.073697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.073932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.073960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.074166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.074195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.074377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.074405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.074599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.074629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 
00:33:09.129 [2024-07-27 02:32:37.074827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.074855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.075049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.075087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.075281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.075308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.075498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.075526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.075725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.075755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.075951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.075982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.076161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.076189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.076373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.076400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.076576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.076603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.076859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.076886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 
00:33:09.129 [2024-07-27 02:32:37.077138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.077167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.077392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.077419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.077625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.077652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.077850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.077902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.078068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.078125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.078298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.078327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.078546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.078575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.078800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.078827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.079030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.079075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.079300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.079355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 
00:33:09.129 [2024-07-27 02:32:37.079560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.079589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.079812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.079839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.080031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.080066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.080274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.080301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.080506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.080533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.080742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.080770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.080998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.081028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.081237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.081264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.081431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.081458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.129 [2024-07-27 02:32:37.081641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.081669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 
00:33:09.129 [2024-07-27 02:32:37.081870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.129 [2024-07-27 02:32:37.081897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.129 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.082122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.082152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.082351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.082379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.082581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.082609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.082817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.082847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.083075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.083113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.083342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.083369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.083542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.083571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.083785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.083838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 00:33:09.130 [2024-07-27 02:32:37.084064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.084092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it. 
00:33:09.130 [2024-07-27 02:32:37.084298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.084339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it.
00:33:09.130 [2024-07-27 02:32:37.085236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-07-27 02:32:37.085280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.130 qpair failed and we were unable to recover it.
[... ~200 further identical failure triplets elided: connect() failed with errno = 111 in posix.c:1023:posix_sock_create, followed by a sock connection error in nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."; timestamps run 02:32:37.084 through 02:32:37.132, with tqpair 0x7ffbd4000b90 for the first few entries and 0x7ffbe4000b90 thereafter ...]
00:33:09.136 [2024-07-27 02:32:37.132341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.132367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.132547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.132573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.132825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.132852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.133082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.133109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.133284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.133311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.133539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.133566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.133729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.133759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.133933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.133964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.134193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.134223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.134456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.134484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 
00:33:09.136 [2024-07-27 02:32:37.134650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.134680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.134860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.134889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.135044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.135082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.135317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.135347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.135530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.135565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.135788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.135819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.136016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.136046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.136 [2024-07-27 02:32:37.136275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.136 [2024-07-27 02:32:37.136318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.136 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.136558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.136589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.136786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.136816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 
00:33:09.137 [2024-07-27 02:32:37.137014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.137042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.137268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.137298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.137497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.137526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.137719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.137747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.137944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.137976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.138151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.138182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.138358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.138386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.138565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.138592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.138797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.138823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.139028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.139054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 
00:33:09.137 [2024-07-27 02:32:37.139298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.139329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.139525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.139555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.139723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.139750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.139948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.139978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.140170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.140202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.140383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.140410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.140571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.140599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.140802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.140830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.141075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.141103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.141274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.141304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 
00:33:09.137 [2024-07-27 02:32:37.141527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.141557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.141791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.141829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.142029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.142066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.142239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.142270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.142449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.142476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.142677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.142704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.142936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.142965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.143159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.143187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.143401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.143436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.143638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.143669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 
00:33:09.137 [2024-07-27 02:32:37.143869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.143896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.144130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.144162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.144351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.137 [2024-07-27 02:32:37.144388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.137 qpair failed and we were unable to recover it. 00:33:09.137 [2024-07-27 02:32:37.144555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.144582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.144785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.144829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.145038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.145084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.145268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.145295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.145495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.145524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.145719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.145749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.145945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.145972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 
00:33:09.138 [2024-07-27 02:32:37.146149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.146181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.146352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.146383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.146565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.146594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.146792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.146822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.147001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.147029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.147188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.147216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.147395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.147434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.147596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.147624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.147839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.147867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.148071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.148114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 
00:33:09.138 [2024-07-27 02:32:37.148322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.148349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.148494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.148522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.148747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.148777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.149010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.149042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.149295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.149323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.149538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.149569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.149765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.149798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.150025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.150055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.150269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.150300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.150471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.150501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 
00:33:09.138 [2024-07-27 02:32:37.150725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.150752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.150984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.151016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.151228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.151271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.151441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.151467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.151650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.151679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.151867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.151897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.152104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.152131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.152333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.152364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.138 [2024-07-27 02:32:37.152559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.138 [2024-07-27 02:32:37.152608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.138 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.152816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.152844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 
00:33:09.139 [2024-07-27 02:32:37.153043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.153084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.153281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.153309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.153480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.153507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.153675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.153710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.153935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.153977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.154163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.154190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.154344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.154370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.154562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.154592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.154789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.154816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.155046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.155085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 
00:33:09.139 [2024-07-27 02:32:37.155264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.155306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.155533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.155561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.155798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.155828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.156048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.156086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.156284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.156311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.156560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.156591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.156851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.156897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.157083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.157114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.157347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.157378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.157607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.157637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 
00:33:09.139 [2024-07-27 02:32:37.157845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.157873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.158077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.158108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.158301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.158333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.158533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.158560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.158717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.158745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.158940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.158970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.159210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.159240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.159454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.159486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.159684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.159712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.159924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.159951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 
00:33:09.139 [2024-07-27 02:32:37.160118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.160149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.160316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.160345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.160548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.160582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.160764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.139 [2024-07-27 02:32:37.160793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.139 qpair failed and we were unable to recover it. 00:33:09.139 [2024-07-27 02:32:37.161036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.140 [2024-07-27 02:32:37.161076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.140 qpair failed and we were unable to recover it. 00:33:09.140 [2024-07-27 02:32:37.161279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.140 [2024-07-27 02:32:37.161307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.140 qpair failed and we were unable to recover it. 00:33:09.140 [2024-07-27 02:32:37.161464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.140 [2024-07-27 02:32:37.161492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.140 qpair failed and we were unable to recover it. 00:33:09.140 [2024-07-27 02:32:37.161728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.140 [2024-07-27 02:32:37.161776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.140 qpair failed and we were unable to recover it. 00:33:09.140 [2024-07-27 02:32:37.161977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.140 [2024-07-27 02:32:37.162009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.140 qpair failed and we were unable to recover it. 00:33:09.140 [2024-07-27 02:32:37.162171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.140 [2024-07-27 02:32:37.162198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.140 qpair failed and we were unable to recover it. 
00:33:09.140 [2024-07-27 02:32:37.162376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.140 [2024-07-27 02:32:37.162404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.140 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats back-to-back: for tqpair=0x7ffbe4000b90 from 02:32:37.162658 through 02:32:37.171858, then for tqpair=0x7ffbd4000b90 from 02:32:37.172096 onward, always with addr=10.0.0.2, port=4420 ...]
00:33:09.146 [2024-07-27 02:32:37.213030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.213067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it.
00:33:09.146 [2024-07-27 02:32:37.213272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.213315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.213503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.213532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.213758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.213788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.214032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.214066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.214281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.214309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.214538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.214570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.214767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.214799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.215043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.215089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.215282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.215312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.215511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.215542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 
00:33:09.146 [2024-07-27 02:32:37.215737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.215765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.215979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.216024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.216258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.216289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.216463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.216491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.216686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.216716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.216942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.216978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.217173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.217202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.217361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.217389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.217579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.217607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.217843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.217871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 
00:33:09.146 [2024-07-27 02:32:37.218080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.146 [2024-07-27 02:32:37.218109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.146 qpair failed and we were unable to recover it. 00:33:09.146 [2024-07-27 02:32:37.218313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.218341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.218556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.218586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.218793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.218826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.219042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.219082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.219346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.219374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.219611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.219639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.219835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.219866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.220081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.220112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.220312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.220340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 
00:33:09.147 [2024-07-27 02:32:37.220543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.220587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.220798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.220826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.221003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.221034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.221273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.221301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.221482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.221511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.221738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.221769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.221965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.221996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.222222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.222250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.222485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.222513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.222721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.222768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 
00:33:09.147 [2024-07-27 02:32:37.222999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.223026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.223254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.223284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.223486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.223517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.223720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.223748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.223898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.223927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.224150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.224182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.224406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.224434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.224666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.224696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.224887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.224919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.225143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.225172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 
00:33:09.147 [2024-07-27 02:32:37.225380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.225412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.225577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.225608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.225798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.225825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.226031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.226068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.226300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.226330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.226556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.147 [2024-07-27 02:32:37.226590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.147 qpair failed and we were unable to recover it. 00:33:09.147 [2024-07-27 02:32:37.226824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.226857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.227086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.227117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.227313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.227340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.227541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.227571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 
00:33:09.148 [2024-07-27 02:32:37.227731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.227761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.227932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.227960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.228172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.228203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.228413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.228441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.228649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.228677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.228877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.228907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.229138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.229169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.229394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.229421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.229598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.229630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.229827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.229858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 
00:33:09.148 [2024-07-27 02:32:37.230066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.230095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.230278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.230306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.230484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.230512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.230766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.230794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.230972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.231003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.231200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.231231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.231412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.231439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.231642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.231672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.231869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.231897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.232100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.232129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 
00:33:09.148 [2024-07-27 02:32:37.232303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.232333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.232538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.232568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.232766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.232794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.232986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.233014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.233215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.233243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.233446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.233474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.233700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.233732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.233972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.234003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.234179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.234208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.234401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.234433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 
00:33:09.148 [2024-07-27 02:32:37.234627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.148 [2024-07-27 02:32:37.234658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.148 qpair failed and we were unable to recover it. 00:33:09.148 [2024-07-27 02:32:37.234880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.234908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.235105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.235136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.235325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.235355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.235525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.235553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.235736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.235769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.235969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.236000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.236229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.236257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.236427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.236457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.236661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.236692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 
00:33:09.149 [2024-07-27 02:32:37.236880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.236908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.237083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.237111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.237322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.237352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.237527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.237555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.237752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.237784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.237976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.238007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.238187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.238218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.238411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.238442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.238605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.238635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.238801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.238833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 
00:33:09.149 [2024-07-27 02:32:37.239032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.239079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.239276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.239303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.239460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.239487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.239735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.239766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.239976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.240007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.240193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.240221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.240423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.240454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.240646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.240677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.240873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.240900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.241113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.241145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 
00:33:09.149 [2024-07-27 02:32:37.241321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.241353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.241613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.241640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.241822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.241853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.242074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.242103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.242288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.242316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.242555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.242583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.149 [2024-07-27 02:32:37.242821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.149 [2024-07-27 02:32:37.242851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.149 qpair failed and we were unable to recover it. 00:33:09.150 [2024-07-27 02:32:37.243042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.150 [2024-07-27 02:32:37.243082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.150 qpair failed and we were unable to recover it. 00:33:09.150 [2024-07-27 02:32:37.243271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.150 [2024-07-27 02:32:37.243304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.150 qpair failed and we were unable to recover it. 00:33:09.150 [2024-07-27 02:32:37.243535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.150 [2024-07-27 02:32:37.243566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.150 qpair failed and we were unable to recover it. 
00:33:09.150 [2024-07-27 02:32:37.243757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.150 [2024-07-27 02:32:37.243786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:09.150 qpair failed and we were unable to recover it.
[... the three records above repeat, with fresh timestamps, for every reconnect attempt from 02:32:37.243 through 02:32:37.272; the target (addr=10.0.0.2, port=4420) and the error (connect() failed, errno = 111) are identical on each attempt, and every attempt fails on tqpair=0x7ffbd4000b90 ...]
00:33:09.432 [2024-07-27 02:32:37.272694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.432 [2024-07-27 02:32:37.272738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:09.432 qpair failed and we were unable to recover it.
[... attempts continue through 02:32:37.293, now alternating between tqpair=0x7ffbd4000b90 and tqpair=0x7ffbe4000b90, with the same address, port, and errno on every failure ...]
00:33:09.435 [2024-07-27 02:32:37.293278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.435 [2024-07-27 02:32:37.293308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:09.435 qpair failed and we were unable to recover it.
00:33:09.435 [2024-07-27 02:32:37.293508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.293540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.293777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.293808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.294033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.294069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.294240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.294267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.294492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.294522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.294859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.294912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.295136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.295165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.295337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.295368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.295584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.295614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.295815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.295843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 
00:33:09.435 [2024-07-27 02:32:37.296011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.296042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.296256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.296287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.296488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.296517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.296675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.296703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.296872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.296901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.297103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.297132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.297368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.297399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.297589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.297620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.297820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.297848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.298049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.298089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 
00:33:09.435 [2024-07-27 02:32:37.298285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.298312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.298492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.298520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.298694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.298727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.298928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.298959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.299159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.299188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.299390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.435 [2024-07-27 02:32:37.299421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.435 qpair failed and we were unable to recover it. 00:33:09.435 [2024-07-27 02:32:37.299623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.299669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.299883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.299913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.300166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.300194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.300431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.300462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 
00:33:09.436 [2024-07-27 02:32:37.300664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.300692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.300889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.300916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.301140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.301168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.301369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.301396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.301596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.301627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.301839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.301875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.302147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.302176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.302399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.302429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.302758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.302813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.303074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.303121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 
00:33:09.436 [2024-07-27 02:32:37.303279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.303307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.303506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.303536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.303721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.303765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.303972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.304004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.304211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.304239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.304411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.304439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.304645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.304676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.304877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.304908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.305107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.305135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.305338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.305368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 
00:33:09.436 [2024-07-27 02:32:37.305561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.305588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.305800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.305828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.305988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.306018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.306216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.306248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.306453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.306483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.306689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.306718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.306946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.306977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.307189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.307218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.307425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.307459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.436 [2024-07-27 02:32:37.307635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.307668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 
00:33:09.436 [2024-07-27 02:32:37.307896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.436 [2024-07-27 02:32:37.307924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.436 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.308135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.308178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.308428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.308472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.308674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.308704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.309053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.309137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.309340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.309377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.309620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.309648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.309805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.309833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.310065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.310097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.310284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.310312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 
00:33:09.437 [2024-07-27 02:32:37.310512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.310543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.310818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.310846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.310986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.311013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.311175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.311204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.311354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.311382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.311553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.311586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.311735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.311763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.311966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.311998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.312208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.312237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.312443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.312475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 
00:33:09.437 [2024-07-27 02:32:37.312817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.312869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.313073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.313101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.313276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.313308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.313505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.313536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.313757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.313785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.314012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.314043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.314245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.314276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.314505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.314534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.314716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.314760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.314938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.314969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 
00:33:09.437 [2024-07-27 02:32:37.315164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.315193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.315395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.315425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.315650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.315681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.315919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.315948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.316141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.316174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.316343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.437 [2024-07-27 02:32:37.316374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.437 qpair failed and we were unable to recover it. 00:33:09.437 [2024-07-27 02:32:37.316567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.316595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.316825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.316856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.317050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.317089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.317285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.317313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 
00:33:09.438 [2024-07-27 02:32:37.317528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.317559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.317882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.317930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.318105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.318134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.318314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.318359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.318607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.318636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.318844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.318874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.319052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.319101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.319298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.319329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.319497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.319524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.319672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.319717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 
00:33:09.438 [2024-07-27 02:32:37.319924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.319956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.320157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.320188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.320427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.320458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.320842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.320905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.321132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.321160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.321334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.321367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.321599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.321631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.321851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.321879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.322078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.322126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.322305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.322333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 
00:33:09.438 [2024-07-27 02:32:37.322546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.322574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.322729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.322759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.322954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.322984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.323216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.323244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.323450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.323485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.323686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.323717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.323939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.323967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.324199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.324230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.324452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.324482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 00:33:09.438 [2024-07-27 02:32:37.324688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.438 [2024-07-27 02:32:37.324716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.438 qpair failed and we were unable to recover it. 
00:33:09.438 [2024-07-27 02:32:37.324871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.438 [2024-07-27 02:32:37.324899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:09.438 qpair failed and we were unable to recover it.
[... the identical three-line sequence (posix.c:1023: connect() failed, errno = 111; nvme_tcp.c:2383: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back from 02:32:37.325103 through 02:32:37.375230, with only the timestamps and the 00:33:09.438-00:33:09.445 log prefixes advancing; duplicate entries elided ...]
00:33:09.445 [2024-07-27 02:32:37.375461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.445 [2024-07-27 02:32:37.375490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:09.445 qpair failed and we were unable to recover it.
00:33:09.445 [2024-07-27 02:32:37.375667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.375693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.375899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.375926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.376127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.376159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.376366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.376410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.376652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.376679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.376902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.376946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.377178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.377206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.377376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.377409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.377642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.377673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.377871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.377902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 
00:33:09.445 [2024-07-27 02:32:37.378132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.378159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.378375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.378415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.378661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.378690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.378903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.378931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.379119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.379150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.379362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.379392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.379613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.379640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.379846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.379873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.380088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.380124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.380330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.380358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 
00:33:09.445 [2024-07-27 02:32:37.380585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.380616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.445 [2024-07-27 02:32:37.380882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.445 [2024-07-27 02:32:37.380930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.445 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.381131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.381159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.381362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.381392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.381578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.381613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.381840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.381867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.382038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.382076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.382281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.382311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.382559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.382585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.382847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.382877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 
00:33:09.446 [2024-07-27 02:32:37.383095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.383122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.383299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.383330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.383551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.383581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.383788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.383815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.384002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.384031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.384316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.384358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.384568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.384597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.384778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.384804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.384990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.385029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.385251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.385293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 
00:33:09.446 [2024-07-27 02:32:37.385469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.385496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.385704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.385734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.385946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.385982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.386179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.386207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.386415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.386458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.386665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.386695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.386896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.386924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.387105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.387135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.387364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.387396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.387595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.387623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 
00:33:09.446 [2024-07-27 02:32:37.387884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.387914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.388145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.388173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.388391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.388417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.388657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.388688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.388916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.388960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.389194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.389226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.389456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.446 [2024-07-27 02:32:37.389487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.446 qpair failed and we were unable to recover it. 00:33:09.446 [2024-07-27 02:32:37.389711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.389760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.389942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.389973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.390175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.390203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 
00:33:09.447 [2024-07-27 02:32:37.390413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.390441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.390644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.390675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.390885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.390921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.391149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.391177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.391389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.391416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.391633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.391663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.391891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.391918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.392095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.392122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.392360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.392390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.392578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.392605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 
00:33:09.447 [2024-07-27 02:32:37.392812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.392842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.393002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.393033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.393291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.393317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.393519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.393549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.393769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.393798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.394006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.394033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.394224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.394255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.394459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.394489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.394713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.394740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.394944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.394974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 
00:33:09.447 [2024-07-27 02:32:37.395135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.395166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.395346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.395373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.395575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.395610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.395830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.395860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.396088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.396116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.396363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.396392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.396592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.396619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.396783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.396809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.397014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.397044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.397277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.397307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 
00:33:09.447 [2024-07-27 02:32:37.397512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.397540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.397765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.447 [2024-07-27 02:32:37.397795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.447 qpair failed and we were unable to recover it. 00:33:09.447 [2024-07-27 02:32:37.397988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.398015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.398179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.398207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.398453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.398483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.398683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.398725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.398965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.398992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.399196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.399226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.399402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.399432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.399639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.399665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 
00:33:09.448 [2024-07-27 02:32:37.399869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.399899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.400126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.400157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.400390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.400417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.400621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.400650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.400845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.400874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.401099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.401134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.401396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.401423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.401648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.401677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.401875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.401902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.402066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.402093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 
00:33:09.448 [2024-07-27 02:32:37.402281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.402308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.402502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.402528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.402740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.402769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.402958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.402988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.403218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.403246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.403457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.403487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.403680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.403709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.403970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.403996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.404250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.404281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.404515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.404545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 
00:33:09.448 [2024-07-27 02:32:37.404740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.404767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.404949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.404976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.405170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.405204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.405386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.405413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.405601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.405631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.405824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.405854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.406049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.406110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.448 qpair failed and we were unable to recover it. 00:33:09.448 [2024-07-27 02:32:37.406349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.448 [2024-07-27 02:32:37.406379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.449 qpair failed and we were unable to recover it. 00:33:09.449 [2024-07-27 02:32:37.406605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.449 [2024-07-27 02:32:37.406634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.449 qpair failed and we were unable to recover it. 00:33:09.449 [2024-07-27 02:32:37.406839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.449 [2024-07-27 02:32:37.406866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420 00:33:09.449 qpair failed and we were unable to recover it. 
00:33:09.449 [2024-07-27 02:32:37.407040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.449 [2024-07-27 02:32:37.407081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbd4000b90 with addr=10.0.0.2, port=4420
00:33:09.449 qpair failed and we were unable to recover it.
00:33:09.449 [2024-07-27 02:32:37.411502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.449 [2024-07-27 02:32:37.411548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:09.449 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." entries repeat continuously from 02:32:37.407 through 02:32:37.455 — first for tqpair=0x7ffbd4000b90 and, from 02:32:37.411 onward, for tqpair=0x7ffbe4000b90 — every attempt targeting addr=10.0.0.2, port=4420]
00:33:09.455 [2024-07-27 02:32:37.455767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.455794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.455970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.455997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.456225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.456252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.456479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.456509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.456720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.456748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.456951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.456978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.457184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.457215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.457412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.457441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.457618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.457645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.457829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.457859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 
00:33:09.455 [2024-07-27 02:32:37.458012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.458039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.458223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.458250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.458426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.458453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.458622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.458652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.458826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.458853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.458998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.459026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.459233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.459265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.459477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.459504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.459708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.459737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.459931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.459960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 
00:33:09.455 [2024-07-27 02:32:37.460164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.460192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.460368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.460394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.460601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.460631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.460863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.460890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.455 [2024-07-27 02:32:37.461042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.455 [2024-07-27 02:32:37.461076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.455 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.461323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.461352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.461584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.461610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.461802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.461831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.462015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.462042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.462239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.462266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 
00:33:09.456 [2024-07-27 02:32:37.462476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.462503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.462696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.462727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.462926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.462953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.463153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.463183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.463356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.463386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.463609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.463636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.463794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.463821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.464020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.464050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.464277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.464305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.464521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.464551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 
00:33:09.456 [2024-07-27 02:32:37.464743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.464772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.464988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.465014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.465190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.465218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.465358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.465385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.465538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.465565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.465741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.465768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.465969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.466000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.466185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.466213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.466444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.466473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.466667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.466701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 
00:33:09.456 [2024-07-27 02:32:37.466896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.466923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.467149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.467179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.467410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.467439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.467621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.467648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.467825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.467852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.468047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.468085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.468281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.468308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.468463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.468490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.468666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.468693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.456 qpair failed and we were unable to recover it. 00:33:09.456 [2024-07-27 02:32:37.468895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.456 [2024-07-27 02:32:37.468922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 
00:33:09.457 [2024-07-27 02:32:37.469133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.469161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.469356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.469386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.469560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.469587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.469788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.469829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.469994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.470024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.470227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.470255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.470449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.470478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.470677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.470706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.470909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.470937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.471111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.471138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 
00:33:09.457 [2024-07-27 02:32:37.471362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.471392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.471632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.471659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.471858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.471888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.472107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.472136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.472368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.472395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.472599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.472626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.472833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.472877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.473107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.473133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.473341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.473371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.473563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.473592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 
00:33:09.457 [2024-07-27 02:32:37.473819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.473846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.474053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.474090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.474262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.474291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.474501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.474528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.474746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.474772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.474928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.474954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.475108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.475136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.475340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.475369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.475591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.475620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.475853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.475886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 
00:33:09.457 [2024-07-27 02:32:37.476114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.476144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.476348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.476378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.476555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.476584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.476811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.457 [2024-07-27 02:32:37.476840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.457 qpair failed and we were unable to recover it. 00:33:09.457 [2024-07-27 02:32:37.477039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.477078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.477281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.477307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.477493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.477520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.477758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.477787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.477975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.478001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.478199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.478229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 
00:33:09.458 [2024-07-27 02:32:37.478420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.478449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.478648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.478675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.478912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.478941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.479178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.479206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.479386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.479412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.479582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.479608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.479821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.479848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.480026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.480053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.480288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.480318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.480518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.480547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 
00:33:09.458 [2024-07-27 02:32:37.480727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.480753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.480898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.480925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.481145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.481175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.481388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.481415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.481652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.481678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.481870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.481899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.482081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.482108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.482308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.482350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.482548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.482577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.482780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.482808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 
00:33:09.458 [2024-07-27 02:32:37.483015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.483046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.483241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.483271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.483482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.483509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.483713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.483739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.483954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.483984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.484211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.484238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.484434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.484464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.484685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.484714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.484912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.484938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 00:33:09.458 [2024-07-27 02:32:37.485107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.458 [2024-07-27 02:32:37.485142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420 00:33:09.458 qpair failed and we were unable to recover it. 
00:33:09.459 [2024-07-27 02:32:37.485313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.459 [2024-07-27 02:32:37.485344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbe4000b90 with addr=10.0.0.2, port=4420
00:33:09.459 qpair failed and we were unable to recover it.
00:33:09.460 [2024-07-27 02:32:37.492269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.460 [2024-07-27 02:32:37.492310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:09.460 qpair failed and we were unable to recover it.
00:33:09.465 [2024-07-27 02:32:37.536269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.465 [2024-07-27 02:32:37.536296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:09.465 qpair failed and we were unable to recover it.
00:33:09.465 [2024-07-27 02:32:37.536508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.536554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.536747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.536777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.536972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.537000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.537193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.537221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.537401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.537446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.537675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.537719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.537899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.537930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.538122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.538153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.538370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.538415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.538582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.538627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 
00:33:09.465 [2024-07-27 02:32:37.538786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.538814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.538992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.539021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.539251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.539280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.539493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.539538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.539749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.539793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.539941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.539969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.540181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.540226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.540406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.540451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.540684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.540728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.540992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.541019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 
00:33:09.465 [2024-07-27 02:32:37.541252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.541299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.541535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.541580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.541851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.541896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.542084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.542119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.465 qpair failed and we were unable to recover it. 00:33:09.465 [2024-07-27 02:32:37.542346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.465 [2024-07-27 02:32:37.542391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.542587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.542619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.542843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.542871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.543065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.543093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.543306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.543334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.543555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.543603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 
00:33:09.466 [2024-07-27 02:32:37.543847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.543892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.544102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.544131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.544330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.544373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.544555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.544600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.544802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.544846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.545035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.545085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.545256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.545282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.545498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.545543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.545777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.545821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.546025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.546053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 
00:33:09.466 [2024-07-27 02:32:37.546250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.546278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.546523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.546569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.546776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.546821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.546976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.547004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.547171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.547199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.547411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.547456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.547698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.547742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.547901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.547940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.548143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.548189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.548436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.548481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 
00:33:09.466 [2024-07-27 02:32:37.548724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.548768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.548952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.549006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.549211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.549256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.549448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.549479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.549701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.549745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.549955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.549983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.550236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.550283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.550466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.550497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.550681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.466 [2024-07-27 02:32:37.550711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.466 qpair failed and we were unable to recover it. 00:33:09.466 [2024-07-27 02:32:37.551056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.551139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 
00:33:09.467 [2024-07-27 02:32:37.551344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.551387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.551586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.551615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.551982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.552033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.552251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.552278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.552503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.552533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.552875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.552928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.553140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.553167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.553368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.553396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.553731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.553790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.553961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.553991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 
00:33:09.467 [2024-07-27 02:32:37.554230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.554257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.554460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.554487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.554728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.554772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.554970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.554997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.555187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.555215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.555427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.555458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.555653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.555683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.556038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.556120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.556330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.556357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.556599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.556628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 
00:33:09.467 [2024-07-27 02:32:37.556799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.556829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.557023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.557054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.557237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.557264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.557519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.557554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.557745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.557775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.557947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.557976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.558158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.558184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.558388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.558418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.558613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.558643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.558864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.558894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 
00:33:09.467 [2024-07-27 02:32:37.559119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.559146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.559330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.559375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.559738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.559794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.560000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.467 [2024-07-27 02:32:37.560027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.467 qpair failed and we were unable to recover it. 00:33:09.467 [2024-07-27 02:32:37.560203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.560230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.560466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.560494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.560686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.560716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.561084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.561158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.561350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.561377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.561586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.561618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 
00:33:09.468 [2024-07-27 02:32:37.561821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.561851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.562024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.562052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.562263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.562289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.562489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.562520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.562800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.562854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.563053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.563091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.563287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.563313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.563509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.563536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.563712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.563742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.563903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.563933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 
00:33:09.468 [2024-07-27 02:32:37.564134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.564166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.564360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.564390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.564585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.564617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.564807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.564878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.565127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.565155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.565336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.565382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.565579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.565606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.565805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.565835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.566029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.566072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.566304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.566331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 
00:33:09.468 [2024-07-27 02:32:37.566513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.566543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.566728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.566758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.566946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.566977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.567187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.567216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.567423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.567455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.567711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.567741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.567969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.567999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.568227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.568255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.568433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.568461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 00:33:09.468 [2024-07-27 02:32:37.568633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.468 [2024-07-27 02:32:37.568663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.468 qpair failed and we were unable to recover it. 
00:33:09.468 [2024-07-27 02:32:37.568857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.468 [2024-07-27 02:32:37.568887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:09.469 qpair failed and we were unable to recover it.
[The three-line error sequence above repeats verbatim for every subsequent connection attempt from 02:32:37.568857 through 02:32:37.616344 (Jenkins timestamps 00:33:09.468-00:33:09.754). Every repetition reports the same errno = 111, the same tqpair=0x1da04b0, and the same target addr=10.0.0.2, port=4420; only the microsecond timestamps differ between attempts.]
00:33:09.754 [2024-07-27 02:32:37.616573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.616603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.616776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.616804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.616957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.616984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.617171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.617199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.617416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.617442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.617655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.617683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.617875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.617904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.618077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.618105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.618311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.618337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.618536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.618563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 
00:33:09.754 [2024-07-27 02:32:37.618741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.618767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.618950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.618976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.619178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.619208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.619377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.619404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.619587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.619616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.619800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.619827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.620003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.620029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.620200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.620226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.620392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.620422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.620594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.620621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 
00:33:09.754 [2024-07-27 02:32:37.620794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.620820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.621043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.621082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.621266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.621293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.621503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.621532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.621745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.621774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.621993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.622022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.622206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.622233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.622389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.622415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.622576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.622617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 00:33:09.754 [2024-07-27 02:32:37.622807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.754 [2024-07-27 02:32:37.622853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.754 qpair failed and we were unable to recover it. 
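[note] errno = 111 is ECONNREFUSED on Linux: each retry above is the host's connect(2) to 10.0.0.2:4420 being refused because no listener is up while the target application is down, so the NVMe/TCP host driver keeps retrying the qpair. A minimal bash sketch of that failure mode (illustrative only, not part of the test suite; the address and port are taken from the log, everything else is assumed):

  #!/usr/bin/env bash
  # Illustrative only: connecting to a port with no listener fails with
  # ECONNREFUSED (errno 111), the error posix_sock_create() reports above.
  addr=10.0.0.2 port=4420   # values from the log; any closed port reproduces this
  for i in {1..5}; do
    # bash's /dev/tcp pseudo-device issues a plain connect(2)
    if bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
      echo "attempt $i: connected"
      break
    fi
    echo "attempt $i: connection refused (errno 111), retrying"
    sleep 0.2
  done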
00:33:09.754 [... the connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair for tqpair=0x7ffbdc000b90 repeats 32 times between 02:32:37.623033 and 02:32:37.630555, interleaved with the shell trace that follows ...]
00:33:09.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1187905 Killed "${NVMF_APP[@]}" "$@"
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1188459
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1188459
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1188459 ']'
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:09.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:09.755 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
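[note] The trace above shows tc2 restarting the target: the previous nvmf_tgt (pid 1187905) was killed at target_disconnect.sh line 36, nvmfappstart relaunches it inside the cvl_0_0_ns_spdk network namespace (nvmfpid=1188459), and waitforlisten then polls until the new process listens on /var/tmp/spdk.sock, which is why the host's connect() attempts keep being refused in the meantime. A rough sketch of such a wait loop (illustrative; SPDK's actual waitforlisten differs, pid/path/retry values are taken from the trace):

  #!/usr/bin/env bash
  # Illustrative only: poll until the target's RPC socket appears,
  # giving up after max_retries attempts or if the process dies.
  pid=1188459 rpc_addr=/var/tmp/spdk.sock max_retries=100   # values from the trace
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }  # still alive?
    [ -S "$rpc_addr" ] && { echo "listening on $rpc_addr"; exit 0; }       # socket up yet?
    sleep 0.1
  done
  echo "timed out waiting for $rpc_addr" >&2
  exit 1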
00:33:09.756 [2024-07-27 02:32:37.630834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.756 [2024-07-27 02:32:37.630879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:09.756 qpair failed and we were unable to recover it.
00:33:09.756 [... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair repeats 98 more times for tqpair=0x7ffbdc000b90 between 02:32:37.631027 and 02:32:37.653863 ...]
00:33:09.759 [2024-07-27 02:32:37.654076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.759 [2024-07-27 02:32:37.654104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420
00:33:09.759 qpair failed and we were unable to recover it.
00:33:09.759 [2024-07-27 02:32:37.654306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.654350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.654556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.654599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.654828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.654872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.655030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.655056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.655269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.655314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.655494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.655543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.655786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.655830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.656011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.656038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.656249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.656279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.656528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.656573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 
00:33:09.759 [2024-07-27 02:32:37.656742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.656787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.656966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.656992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.657198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.657244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.657440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.657485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.657711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.657754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.657934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.657961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.658140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.658185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.658392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.658436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.658675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.658718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.658933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.658960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 
00:33:09.759 [2024-07-27 02:32:37.659159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.659208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.659410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.659453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.659672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.659700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.659878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.659906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.660111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.660138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.660365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.660409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.660578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.660622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.660858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.660888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.661089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.661117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.661289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.661333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 
00:33:09.759 [2024-07-27 02:32:37.661558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.661601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.661857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.661902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.662110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.759 [2024-07-27 02:32:37.662137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.759 qpair failed and we were unable to recover it. 00:33:09.759 [2024-07-27 02:32:37.662332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.662377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.662612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.662657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.662832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.662876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.663052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.663085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.663316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.663360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.663531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.663575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.663785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.663830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 
00:33:09.760 [2024-07-27 02:32:37.663978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.664006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.664247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.664291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.664511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.664555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.664783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.664827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.665003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.665031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.665254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.665286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.665505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.665549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.665756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.665801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.666001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.666027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.666255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.666299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 
00:33:09.760 [2024-07-27 02:32:37.666525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.666568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.666741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.666770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.666988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.667015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.667216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.667261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.667497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.667542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.667774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.667817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.668025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.668052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.668265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.668294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.668529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.668574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.668791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.668835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 
00:33:09.760 [2024-07-27 02:32:37.669037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.669070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.669296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.669340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.669529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.669574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.669807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.669851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.670073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.670101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.670276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.670319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.670546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.670590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.670766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.670796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.760 [2024-07-27 02:32:37.670967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.760 [2024-07-27 02:32:37.670993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.760 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.671210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.671240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 
00:33:09.761 [2024-07-27 02:32:37.671439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.671483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.671717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.671761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbdc000b90 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.671957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.671998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.672160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.672189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.672394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.672424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.672643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.672672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.672877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.672906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.673071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.673115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.673258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.673301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 00:33:09.761 [2024-07-27 02:32:37.673529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.761 [2024-07-27 02:32:37.673558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.761 qpair failed and we were unable to recover it. 
00:33:09.761 [... the same triplet for tqpair=0x1da04b0 repeats from 02:32:37.672160 through 02:32:37.674881; duplicate entries elided ...]
00:33:09.761 [2024-07-27 02:32:37.674977] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:33:09.761 [2024-07-27 02:32:37.675053] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:09.763 [... the same triplet for tqpair=0x1da04b0 repeats from 02:32:37.675074 through 02:32:37.694910; duplicate entries elided ...]
00:33:09.763 [2024-07-27 02:32:37.695182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.763 [2024-07-27 02:32:37.695213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.763 qpair failed and we were unable to recover it. 00:33:09.763 [2024-07-27 02:32:37.695415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.763 [2024-07-27 02:32:37.695441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.763 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.695622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.695649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.695852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.695881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.696068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.696094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.696351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.696377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.696619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.696645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.696870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.696899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.697158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.697185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.697384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.697413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 
00:33:09.764 [2024-07-27 02:32:37.697636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.697669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.697899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.697926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.698159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.698189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.698409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.698438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.698666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.698692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.698924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.698952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.699219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.699245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.699436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.699463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.699657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.699686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.699886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.699913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 
00:33:09.764 [2024-07-27 02:32:37.700096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.700123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.700328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.700364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.700584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.700614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.700799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.700826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.701013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.701040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.701303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.701329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.701489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.701517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.701743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.701772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.701997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.702026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.702232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.702259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 
00:33:09.764 [2024-07-27 02:32:37.702409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.702435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.702585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.702611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.702793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.702819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.703053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.703090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.703297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.703326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.703558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.703584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.703817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.764 [2024-07-27 02:32:37.703846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.764 qpair failed and we were unable to recover it. 00:33:09.764 [2024-07-27 02:32:37.704086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.704129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.704294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.704320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.704521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.704550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 
00:33:09.765 [2024-07-27 02:32:37.704755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.704782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.704957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.704983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.705254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.705284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.705474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.705503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.705676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.705702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.705878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.705904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.706105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.706135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.706342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.706381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.706532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.706558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.706708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.706734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 
00:33:09.765 [2024-07-27 02:32:37.706904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.706931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.707146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.707176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.707339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.707380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.707574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.707601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.707803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.707832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.708092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.708119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.708301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.708327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.708553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.708582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.708809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.708835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.709038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.709076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 
00:33:09.765 [2024-07-27 02:32:37.709258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.709287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.709487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.709527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.709725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.709752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.709935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.709963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.710157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.710187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.710389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.710416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.710679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.710708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.710893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.710922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.711117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.711144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.711304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.711331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 
00:33:09.765 [2024-07-27 02:32:37.711517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.711543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.711752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.711778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.711980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.765 [2024-07-27 02:32:37.712009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.765 qpair failed and we were unable to recover it. 00:33:09.765 [2024-07-27 02:32:37.712245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.712272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.712477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.712504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.712702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.712731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.712903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.712932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.713132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.713159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.713370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.713403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.713675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.713704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 
00:33:09.766 [2024-07-27 02:32:37.713925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.713952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.714180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.766 [2024-07-27 02:32:37.714210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.714422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.714448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.714601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.714628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.714802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.714832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.714999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.715028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.715218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.715244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.715432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.715458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.715637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.715663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.715840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.715866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 
00:33:09.766 [2024-07-27 02:32:37.716069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.716097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.716278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.716307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.716510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.716536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.716694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.716723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.716919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.716948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.717145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.717172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.717368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.717397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.717595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.717622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.717785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.717813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.717964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.717992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 
00:33:09.766 [2024-07-27 02:32:37.718167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.718194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.718353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.718380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.718558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.718584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.718586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:09.766 [2024-07-27 02:32:37.718764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.718790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.718945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.718971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.719159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.719186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.719368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.719394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.719567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.766 [2024-07-27 02:32:37.719594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.766 qpair failed and we were unable to recover it. 00:33:09.766 [2024-07-27 02:32:37.719796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.719822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it.
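
The pci_dpdk.c NOTICE above comes from SPDK's DPDK-compatibility layer, which inspects the linked DPDK version at init and, as the message says, only tolerates an in-development release such as 24.07.0-rc3 for validation. Assuming DPDK development headers and libraries are installed, the linked version string can be queried with rte_version() (a sketch, not SPDK's own check):

    /* Sketch: print the DPDK version string seen at runtime,
     * e.g. "DPDK 24.07.0-rc3" for the build used in this run.
     * Requires DPDK headers/libs (link against rte_eal). */
    #include <stdio.h>
    #include <rte_version.h>

    int main(void)
    {
        printf("%s\n", rte_version());
        return 0;
    }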
00:33:09.767 [2024-07-27 02:32:37.720002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.720029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.720301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.720328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.720522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.720548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.720721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.720748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.720931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.720957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.721136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.721164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.721320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.721358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.721511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.721537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.721681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.721707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.721861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.721887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 
00:33:09.767 [2024-07-27 02:32:37.722094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.722121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.722302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.722328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.722484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.722511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.722764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.722791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.723043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.723075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.723253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.723280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.723459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.723486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.723624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.723651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.723804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.723830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 00:33:09.767 [2024-07-27 02:32:37.724004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.724031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it. 
00:33:09.767 [2024-07-27 02:32:37.724225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.767 [2024-07-27 02:32:37.724252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.767 qpair failed and we were unable to recover it.
00:33:09.767 [... the same three-line sequence — connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it — repeats continuously from 02:32:37.724433 through 02:32:37.748580 for tqpair=0x1da04b0 and tqpair=0x7ffbd4000b90, all with addr=10.0.0.2, port=4420 ...]
00:33:09.771 [2024-07-27 02:32:37.748731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:09.771 [... the same sequence continues from 02:32:37.748783 through 02:32:37.767275 for tqpair=0x1da04b0, 0x7ffbd4000b90, 0x7ffbe4000b90, and 0x7ffbdc000b90, all with addr=10.0.0.2, port=4420 ...]
00:33:09.773 [2024-07-27 02:32:37.767418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.767446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it.
00:33:09.773 [2024-07-27 02:32:37.767620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.767648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.767855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.767882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.768038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.768076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.768230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.768259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.768436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.768463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.768633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.768661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.768863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.768890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.769048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.769083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.773 [2024-07-27 02:32:37.769263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.773 [2024-07-27 02:32:37.769294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.773 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.769480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.769507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 
00:33:09.774 [2024-07-27 02:32:37.769714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.769742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.769946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.769973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.770151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.770179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.770366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.770404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.770584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.770612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.770815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.770842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.771015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.771042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.771193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.771221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.771425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.771451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.771633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.771661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 
00:33:09.774 [2024-07-27 02:32:37.771813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.771840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.772016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.772043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.772212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.772241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.772424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.772452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.772631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.772658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.772834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.772861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.773037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.773070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.773256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.773283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.773465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.773492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.773640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.773667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 
00:33:09.774 [2024-07-27 02:32:37.773850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.773877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.774047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.774080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.774260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.774287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.774431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.774458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.774609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.774636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.774822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.774854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.775029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.775056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.775317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.775345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.775571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.775598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.775779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.775805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 
00:33:09.774 [2024-07-27 02:32:37.775995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.776022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.776217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.776246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.776407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.776434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.776610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.776638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.774 [2024-07-27 02:32:37.776810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.774 [2024-07-27 02:32:37.776838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.774 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.777013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.777041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.777232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.777259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.777444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.777471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.777658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.777685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.777848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.777876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 
00:33:09.775 [2024-07-27 02:32:37.778018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.778046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.778225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.778252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.778428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.778454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.778633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.778661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.778804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.778830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.779014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.779041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.779223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.779250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.779430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.779457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.779628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.779654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.779832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.779859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 
00:33:09.775 [2024-07-27 02:32:37.780030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.780056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.780273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.780300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.780444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.780472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.780682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.780710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.780915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.780943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.781130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.781158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.781339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.781366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.781541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.781567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.781743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.781770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.781973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.782000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 
00:33:09.775 [2024-07-27 02:32:37.782172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.782199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.782383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.782410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.782590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.782617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.782817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.782845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.783004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.783031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.783215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.783242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.783401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.783428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.783576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.783604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.783774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.783800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.775 [2024-07-27 02:32:37.783972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.783999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 
00:33:09.775 [2024-07-27 02:32:37.784158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.775 [2024-07-27 02:32:37.784186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.775 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.784389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.784416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.784591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.784618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.784821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.784849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.784992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.785019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.785203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.785230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.785389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.785416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.785595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.785621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.785823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.785850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.786020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.786047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 
00:33:09.776 [2024-07-27 02:32:37.786260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.786288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.786461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.786488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.786664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.786691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.786871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.786898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.787076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.787103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.787248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.787276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.787433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.787461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.787609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.787637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.787892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.787918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.788092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.788119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 
00:33:09.776 [2024-07-27 02:32:37.788321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.788348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.788521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.788548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.788721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.788748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.788924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.788956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.789111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.789140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.789291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.789319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.789498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.789524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.789777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.789804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.790062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.790090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.790268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.790295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 
00:33:09.776 [2024-07-27 02:32:37.790437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.790464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.790621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.790648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.790824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.790851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.791055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.791089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.791234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.791261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.791435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.791462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.776 qpair failed and we were unable to recover it. 00:33:09.776 [2024-07-27 02:32:37.791640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.776 [2024-07-27 02:32:37.791668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.791874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.791902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.792087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.792116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.792264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.792290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 
00:33:09.777 [2024-07-27 02:32:37.792464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.792491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.792664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.792690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.792833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.792860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.793013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.793041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.793238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.793265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.793461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.793488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.793669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.793695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.793981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.794011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.794216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.794243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 00:33:09.777 [2024-07-27 02:32:37.794393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.777 [2024-07-27 02:32:37.794419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.777 qpair failed and we were unable to recover it. 
00:33:09.777 [2024-07-27 02:32:37.794575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.777 [2024-07-27 02:32:37.794606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:09.777 qpair failed and we were unable to recover it.
00:33:09.783 [... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every reconnect attempt between the first and last entries shown; only the timestamps change ...]
00:33:09.783 [2024-07-27 02:32:37.838563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:09.783 [2024-07-27 02:32:37.838589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:09.783 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111) / qpair failure sequence for tqpair=0x1da04b0 continues from 02:32:37.838731 through 02:32:37.839821, interleaved with the following notices ...]
00:33:09.783 [2024-07-27 02:32:37.839600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:09.783 [2024-07-27 02:32:37.839637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:09.783 [2024-07-27 02:32:37.839654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:09.783 [2024-07-27 02:32:37.839671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:09.783 [2024-07-27 02:32:37.839682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:09.783 [2024-07-27 02:32:37.839739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:09.783 [2024-07-27 02:32:37.839770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:09.783 [2024-07-27 02:32:37.839822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:09.783 [2024-07-27 02:32:37.839824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[... the connect() failed (errno = 111) / qpair failure sequence for tqpair=0x1da04b0 resumes from 02:32:37.840000 through 02:32:37.841735 ...]
[... the connect() failed (errno = 111) / qpair failure sequence for tqpair=0x1da04b0 (addr=10.0.0.2, port=4420) repeats continuously from 02:32:37.841880 through 02:32:37.871368; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:09.788 [2024-07-27 02:32:37.871556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.871583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.871764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.871791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.871951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.871978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.872139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.872166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.872341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.872367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.872515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.872541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.872804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.872830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.873010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.873036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.873240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.873267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.873411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.873438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 
00:33:09.788 [2024-07-27 02:32:37.873640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.873666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.873859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.873886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.874093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.874120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.874269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.874295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.874469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.874501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.874663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.874689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.874845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.874872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.875046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.875077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.875261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.875287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.788 [2024-07-27 02:32:37.875471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.875497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 
00:33:09.788 [2024-07-27 02:32:37.875638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.788 [2024-07-27 02:32:37.875664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.788 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.875819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.875846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.875999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.876025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.876206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.876233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.876383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.876410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.876571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.876597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.876768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.876795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.876975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.877001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.877184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.877211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.877355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.877381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 
00:33:09.789 [2024-07-27 02:32:37.877534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.877570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.877764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.877791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.877943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.877969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.878151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.878178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.878343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.878370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.878548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.878574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.878759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.878785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.878935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.878961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.879111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.879137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.879321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.879348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 
00:33:09.789 [2024-07-27 02:32:37.879492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.879518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.879694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.879720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.879878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.879905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.880159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.880185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.880367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.880393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.880541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.880567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.880747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.880774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.880980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.881006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.881164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.881191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.881349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.881376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 
00:33:09.789 [2024-07-27 02:32:37.881548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.881574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.881775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.881801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.881982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.882009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.882159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.882186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.882359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.882385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.882541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.882567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.789 [2024-07-27 02:32:37.882730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.789 [2024-07-27 02:32:37.882757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.789 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.883023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.883050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.883238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.883264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.883430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.883456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 
00:33:09.790 [2024-07-27 02:32:37.883616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.883651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.883821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.883847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.884049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.884081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.884247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.884273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.884425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.884452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.884625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.884651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.884870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.884896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.885077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.885108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.885252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.885278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.885510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.885536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 
00:33:09.790 [2024-07-27 02:32:37.885710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.885737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.885882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.885909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.886052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.886085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.886256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.886283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.886442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.886468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.886627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.886653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.886804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.886831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.886972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.886998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.887198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.887226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.887397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.887424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 
00:33:09.790 [2024-07-27 02:32:37.887569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.887595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.887764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.887791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.888040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.888076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.888260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.888287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:09.790 [2024-07-27 02:32:37.888439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.790 [2024-07-27 02:32:37.888465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:09.790 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.888635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.888662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.888833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.888861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.889040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.889088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.889249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.889278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.889448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.889475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 
00:33:10.051 [2024-07-27 02:32:37.889614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.889641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.889822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.889849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.890028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.890055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.890251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.890278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.890539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.890565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.890711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.890737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.890909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.890936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.891091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.891118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.051 qpair failed and we were unable to recover it. 00:33:10.051 [2024-07-27 02:32:37.891280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.051 [2024-07-27 02:32:37.891307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.891473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.891499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 
00:33:10.052 [2024-07-27 02:32:37.891655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.891682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.891858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.891885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.892072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.892099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.892253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.892281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.892450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.892476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.892637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.892663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.892816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.892845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.893022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.893063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.893215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.893241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.893384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.893414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 
00:33:10.052 [2024-07-27 02:32:37.893566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.893592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.893761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.893787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.893956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.893982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.894177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.894203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.894360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.894386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.894554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.894580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.894771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.894797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.894952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.894978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.895142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.895169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.895358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.895384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 
00:33:10.052 [2024-07-27 02:32:37.895537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.895564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.895722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.895750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.895890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.895917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.896089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.896116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.896342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.896369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.896555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.896582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.896732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.896758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.896933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.896959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.897111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.897138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 00:33:10.052 [2024-07-27 02:32:37.897281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.052 [2024-07-27 02:32:37.897307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420 00:33:10.052 qpair failed and we were unable to recover it. 
00:33:10.052 [2024-07-27 02:32:37.897458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.052 [2024-07-27 02:32:37.897484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:10.052 qpair failed and we were unable to recover it.
[... the same three-line failure pattern for tqpair=0x1da04b0 (connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 140 times with advancing timestamps through 02:32:37.924; the duplicate entries are elided here ...]
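Note on the repeated failure above: errno = 111 is ECONNREFUSED on Linux, i.e. nothing was accepting TCP connections at 10.0.0.2:4420, which is the condition this target-disconnect test deliberately creates, so every qpair reconnect attempt fails immediately. As an illustrative way to decode the errno and probe the listener from a shell (these commands are not part of the test script):

  # Decode errno 111 (prints "ECONNREFUSED Connection refused" on Linux)
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # Probe whether anything is currently listening on the target port
  nc -zv 10.0.0.2 4420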
00:33:10.057 [2024-07-27 02:32:37.924953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.057 [2024-07-27 02:32:37.924979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:10.057 qpair failed and we were unable to recover it.
00:33:10.057 [2024-07-27 02:32:37.925121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.057 [2024-07-27 02:32:37.925148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da04b0 with addr=10.0.0.2, port=4420
00:33:10.057 qpair failed and we were unable to recover it.
00:33:10.057 A controller has encountered a failure and is being reset.
00:33:10.057 [2024-07-27 02:32:37.925365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.057 [2024-07-27 02:32:37.925414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dae470 with addr=10.0.0.2, port=4420
00:33:10.057 [2024-07-27 02:32:37.925436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dae470 is same with the state(5) to be set
00:33:10.057 [2024-07-27 02:32:37.925463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dae470 (9): Bad file descriptor
00:33:10.057 [2024-07-27 02:32:37.925483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.057 [2024-07-27 02:32:37.925498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.057 [2024-07-27 02:32:37.925514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.057 Unable to reset the controller.
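This last block is the host-side reconnect path giving up: the replacement qpair (tqpair=0x1dae470) is also refused, flushing the dead socket returns "Bad file descriptor", and nvme_ctrlr_fail() leaves nqn.2016-06.io.spdk:cnode1 in a failed state. Outside this harness, a sensible first check when a reconnect loop ends like this is whether the target still exposes the subsystem and its listener; a minimal sketch against the target's default RPC socket (rpc.py path assumed relative to an SPDK checkout; not part of this test):

  # Is cnode1 still defined, and is 10.0.0.2:4420 still listed as a listener?
  scripts/rpc.py nvmf_get_subsystems
  scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1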
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:10.057 02:32:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:10.057 Malloc0
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:10.057 [2024-07-27 02:32:38.006214] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:10.057 [2024-07-27 02:32:38.034474] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:10.057 02:32:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1187936
00:33:10.992 Controller properly reset.
00:33:16.266 Initializing NVMe Controllers
00:33:16.266 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:16.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:16.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:33:16.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:33:16.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:33:16.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:33:16.266 Initialization complete. Launching workers.
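Stripped of the xtrace noise, the trace above is a straightforward target bring-up: create a malloc bdev, initialize the TCP transport, create subsystem cnode1, attach the bdev as a namespace, and add listeners for the subsystem and for discovery. A standalone sketch of the same sequence using scripts/rpc.py directly (rpc_cmd in this harness is a wrapper that forwards to rpc.py; SPDK checkout paths and the default RPC socket are assumed):

  # 64 MB malloc bdev with 512-byte blocks, named Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Initialize the TCP transport (flags as used by the test above)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  # Subsystem that allows any host, with a fixed serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Attach the bdev as a namespace of the subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Listen for I/O and for discovery on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, a kernel initiator could attach with nvme-cli, e.g. nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 (illustrative; this test drives the connection with SPDK's own host stack instead, as the "Attaching to NVMe over Fabrics controller" lines show).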
00:33:16.266 Starting thread on core 1
00:33:16.266 Starting thread on core 2
00:33:16.266 Starting thread on core 3
00:33:16.266 Starting thread on core 0
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:33:16.266
00:33:16.266 real 0m10.673s
00:33:16.266 user 0m32.154s
00:33:16.266 sys 0m8.113s
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:16.266 ************************************
00:33:16.266 END TEST nvmf_target_disconnect_tc2
00:33:16.266 ************************************
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:16.266 rmmod nvme_tcp
00:33:16.266 rmmod nvme_fabrics
00:33:16.266 rmmod nvme_keyring
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1188459 ']'
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1188459
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1188459 ']'
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1188459
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1188459
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1188459'
00:33:16.266 killing process with pid 1188459
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1188459
00:33:16.266 02:32:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1188459
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:16.266 02:32:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:18.170 02:32:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:18.170
00:33:18.170 real 0m15.216s
00:33:18.170 user 0m57.354s
00:33:18.170 sys 0m10.458s
00:33:18.170 02:32:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:18.170 02:32:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:33:18.170 ************************************
00:33:18.170 END TEST nvmf_target_disconnect
00:33:18.170 ************************************
00:33:18.170 02:32:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:33:18.170
00:33:18.170 real 6m30.617s
00:33:18.170 user 17m1.541s
00:33:18.170 sys 1m27.095s
00:33:18.170 02:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:18.170 02:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.170 ************************************
00:33:18.170 END TEST nvmf_host
00:33:18.170 ************************************
00:33:18.170
00:33:18.170 real 27m8.876s
00:33:18.170 user 74m5.204s
00:33:18.170 sys 6m29.672s
00:33:18.170 02:32:46 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:18.170 02:32:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:18.170 ************************************
00:33:18.170 END TEST nvmf_tcp
00:33:18.170 ************************************
00:33:18.429 02:32:46 -- spdk/autotest.sh@294 -- # [[ 0 -eq 0 ]]
00:33:18.429 02:32:46 -- spdk/autotest.sh@295 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:18.429 02:32:46 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:33:18.429 02:32:46 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:18.429 02:32:46 -- common/autotest_common.sh@10 -- # set +x
00:33:18.429 ************************************
00:33:18.429 START TEST spdkcli_nvmf_tcp
00:33:18.429 ************************************
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:18.429 * Looking for test storage...
00:33:18.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1189542
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1189542
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1189542 ']'
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:18.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:18.429 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:18.430 [2024-07-27 02:32:46.481859] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization...
00:33:18.430 [2024-07-27 02:32:46.481934] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189542 ]
00:33:18.430 EAL: No free 2048 kB hugepages reported on node 1
00:33:18.430 [2024-07-27 02:32:46.516282] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:18.430 [2024-07-27 02:32:46.544514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:18.688 [2024-07-27 02:32:46.636581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:18.688 [2024-07-27 02:32:46.636586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:18.688 02:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:33:18.688 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:33:18.688 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:33:18.688 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:33:18.688 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:33:18.688 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:33:18.688 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:33:18.688 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:33:18.688 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:18.688 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:18.688 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:18.688 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:18.688 ' 00:33:21.224 [2024-07-27 02:32:49.290617] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.617 [2024-07-27 02:32:50.531047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:25.146 [2024-07-27 02:32:52.830282] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:27.044 [2024-07-27 02:32:54.796542] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:28.420 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:28.420 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:28.420 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:28.420 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:28.420 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:28.420 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:28.420 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:28.420 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:28.420 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:28.420 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:28.420 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:28.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:28.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:28.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:28.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:28.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:28.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:28.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:28.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:28.421 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:28.421 02:32:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:28.989 02:32:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:28.989 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:28.989 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:28.989 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:28.989 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:28.989 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:28.989 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:28.989 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:28.989 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:28.989 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:28.989 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:28.990 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:28.990 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:28.990 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:28.990 ' 00:33:34.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:34.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:34.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:34.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:34.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:34.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:34.258 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:34.258 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:34.258 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:34.258 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:34.258 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:34.258 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:34.258 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:34.258 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1189542 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1189542 ']' 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1189542 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1189542 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1189542' 00:33:34.258 killing process with pid 1189542 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1189542 00:33:34.258 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1189542 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1189542 ']' 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1189542 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1189542 ']' 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1189542 00:33:34.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1189542) - No such process 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1189542 is not found' 00:33:34.517 Process with pid 1189542 is not found 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:34.517 00:33:34.517 real 0m16.082s 00:33:34.517 user 0m34.132s 00:33:34.517 sys 0m0.810s 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:34.517 02:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:34.517 ************************************ 00:33:34.517 END TEST spdkcli_nvmf_tcp 00:33:34.517 ************************************ 00:33:34.517 02:33:02 -- spdk/autotest.sh@296 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:34.517 02:33:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:34.517 02:33:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:34.517 02:33:02 -- common/autotest_common.sh@10 -- # set +x 00:33:34.517 ************************************ 00:33:34.517 START TEST nvmf_identify_passthru 00:33:34.517 ************************************ 00:33:34.517 02:33:02 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:34.517 * Looking for test storage... 00:33:34.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:34.517 02:33:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.517 02:33:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.517 02:33:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.517 02:33:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.517 02:33:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.517 02:33:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.517 02:33:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.517 02:33:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:34.517 02:33:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:34.517 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:34.517 02:33:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.517 02:33:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.517 02:33:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.517 02:33:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.518 02:33:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.518 02:33:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.518 02:33:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.518 02:33:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:34.518 02:33:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.518 02:33:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.518 02:33:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:34.518 02:33:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:34.518 02:33:02 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:34.518 02:33:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.419 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.419 02:33:04 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:33:36.419 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:36.419 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:36.419 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:36.419 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:36.419 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:36.419 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:36.420 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:36.420 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:36.420 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:36.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
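For reference, the device-discovery loop traced above reduces to a sysfs lookup: each PCI network function exposes its kernel netdev name under /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch using the two BDFs found in this run (they will differ on other hosts):

  for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done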
00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:36.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:33:36.420 00:33:36.420 --- 10.0.0.2 ping statistics --- 00:33:36.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.420 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:33:36.420 00:33:36.420 --- 10.0.0.1 ping statistics --- 00:33:36.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.420 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:36.420 02:33:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:36.678 02:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.678 02:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:36.678 02:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:33:36.678 02:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:36.678 02:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:36.678 02:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:36.678 02:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:36.678 02:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:36.678 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.871 
02:33:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:33:40.871 02:33:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:40.871 02:33:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:40.871 02:33:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:40.871 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.119 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:45.119 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:45.119 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:45.119 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1194771 00:33:45.119 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:45.119 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:45.119 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1194771 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1194771 ']' 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:45.119 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:45.119 [2024-07-27 02:33:13.129330] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:33:45.119 [2024-07-27 02:33:13.129437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.119 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.119 [2024-07-27 02:33:13.177233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:45.119 [2024-07-27 02:33:13.208670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:45.376 [2024-07-27 02:33:13.301568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
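The target launch traced above (identify_passthru.sh@30-35) can be reproduced by hand. A rough sketch, assuming it is run from the SPDK repo root and that the cvl_0_0_ns_spdk namespace created by nvmf_tcp_init already exists; the polling loop is a stand-in for the harness's waitforlisten helper, not its actual implementation:

  # -m 0xF: 4-core mask; -e 0xFFFF: enable all tracepoint groups;
  # --wait-for-rpc: bring up the RPC server but defer subsystem init
  # until framework_start_init arrives over RPC.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break   # default UNIX-domain RPC socket
    sleep 0.1
  done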
00:33:45.376 [2024-07-27 02:33:13.301651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.376 [2024-07-27 02:33:13.301668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.376 [2024-07-27 02:33:13.301682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.376 [2024-07-27 02:33:13.301702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.376 [2024-07-27 02:33:13.305082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.376 [2024-07-27 02:33:13.305132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:45.377 [2024-07-27 02:33:13.305227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.377 [2024-07-27 02:33:13.305224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:33:45.377 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:45.377 INFO: Log level set to 20 00:33:45.377 INFO: Requests: 00:33:45.377 { 00:33:45.377 "jsonrpc": "2.0", 00:33:45.377 "method": "nvmf_set_config", 00:33:45.377 "id": 1, 00:33:45.377 "params": { 00:33:45.377 "admin_cmd_passthru": { 00:33:45.377 "identify_ctrlr": true 00:33:45.377 } 00:33:45.377 } 00:33:45.377 } 00:33:45.377 00:33:45.377 INFO: response: 00:33:45.377 { 00:33:45.377 "jsonrpc": "2.0", 00:33:45.377 "id": 1, 00:33:45.377 "result": true 00:33:45.377 } 00:33:45.377 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.377 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:45.377 INFO: Setting log level to 20 00:33:45.377 INFO: Setting log level to 20 00:33:45.377 INFO: Log level set to 20 00:33:45.377 INFO: Log level set to 20 00:33:45.377 INFO: Requests: 00:33:45.377 { 00:33:45.377 "jsonrpc": "2.0", 00:33:45.377 "method": "framework_start_init", 00:33:45.377 "id": 1 00:33:45.377 } 00:33:45.377 00:33:45.377 INFO: Requests: 00:33:45.377 { 00:33:45.377 "jsonrpc": "2.0", 00:33:45.377 "method": "framework_start_init", 00:33:45.377 "id": 1 00:33:45.377 } 00:33:45.377 00:33:45.377 [2024-07-27 02:33:13.486259] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:45.377 INFO: response: 00:33:45.377 { 00:33:45.377 "jsonrpc": "2.0", 00:33:45.377 "id": 1, 00:33:45.377 "result": true 00:33:45.377 } 00:33:45.377 00:33:45.377 INFO: response: 00:33:45.377 { 00:33:45.377 "jsonrpc": "2.0", 00:33:45.377 "id": 1, 00:33:45.377 "result": true 00:33:45.377 } 00:33:45.377 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.377 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
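The three rpc_cmd calls traced at identify_passthru.sh@36-38 are effectively wrappers around scripts/rpc.py. A sketch of the equivalent direct invocations against the default /var/tmp/spdk.sock socket (a UNIX socket, so the network namespace does not matter here); the flags mirror the ones in the trace:

  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr  # forward admin Identify to the underlying NVMe bdev
  ./scripts/rpc.py framework_start_init                       # complete the init deferred by --wait-for-rpc
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, NVMF_TRANSPORT_OPTS plus io_unit_size 8192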
00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:45.377 INFO: Setting log level to 40 00:33:45.377 INFO: Setting log level to 40 00:33:45.377 INFO: Setting log level to 40 00:33:45.377 [2024-07-27 02:33:13.496229] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.377 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:45.377 02:33:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.377 02:33:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.662 Nvme0n1 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.662 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.662 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.662 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.662 [2024-07-27 02:33:16.384505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.662 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.662 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.662 [ 00:33:48.662 { 00:33:48.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:48.663 "subtype": "Discovery", 00:33:48.663 "listen_addresses": [], 00:33:48.663 "allow_any_host": true, 00:33:48.663 "hosts": [] 00:33:48.663 }, 00:33:48.663 { 00:33:48.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:48.663 "subtype": "NVMe", 00:33:48.663 "listen_addresses": [ 00:33:48.663 { 00:33:48.663 "trtype": "TCP", 00:33:48.663 "adrfam": "IPv4", 00:33:48.663 "traddr": "10.0.0.2", 00:33:48.663 
"trsvcid": "4420" 00:33:48.663 } 00:33:48.663 ], 00:33:48.663 "allow_any_host": true, 00:33:48.663 "hosts": [], 00:33:48.663 "serial_number": "SPDK00000000000001", 00:33:48.663 "model_number": "SPDK bdev Controller", 00:33:48.663 "max_namespaces": 1, 00:33:48.663 "min_cntlid": 1, 00:33:48.663 "max_cntlid": 65519, 00:33:48.663 "namespaces": [ 00:33:48.663 { 00:33:48.663 "nsid": 1, 00:33:48.663 "bdev_name": "Nvme0n1", 00:33:48.663 "name": "Nvme0n1", 00:33:48.663 "nguid": "FD708602E7154633B3932DCC29753487", 00:33:48.663 "uuid": "fd708602-e715-4633-b393-2dcc29753487" 00:33:48.663 } 00:33:48.663 ] 00:33:48.663 } 00:33:48.663 ] 00:33:48.663 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:48.663 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:48.663 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:48.663 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.663 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.663 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:48.663 02:33:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:48.663 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:48.663 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:33:48.663 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:48.663 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:33:48.663 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:48.663 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:48.663 rmmod nvme_tcp 00:33:48.663 rmmod nvme_fabrics 00:33:48.663 rmmod nvme_keyring 00:33:48.921 02:33:16 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:48.921 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:33:48.921 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:33:48.921 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1194771 ']' 00:33:48.921 02:33:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1194771 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1194771 ']' 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1194771 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1194771 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1194771' 00:33:48.921 killing process with pid 1194771 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1194771 00:33:48.921 02:33:16 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1194771 00:33:50.294 02:33:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:50.294 02:33:18 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:50.294 02:33:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:50.294 02:33:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:50.294 02:33:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:50.294 02:33:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.294 02:33:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:50.294 02:33:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.828 02:33:20 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:52.828 00:33:52.828 real 0m17.962s 00:33:52.828 user 0m26.766s 00:33:52.828 sys 0m2.294s 00:33:52.828 02:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:52.828 02:33:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.828 ************************************ 00:33:52.828 END TEST nvmf_identify_passthru 00:33:52.828 ************************************ 00:33:52.829 02:33:20 -- spdk/autotest.sh@298 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:52.829 02:33:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:52.829 02:33:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:52.829 02:33:20 -- common/autotest_common.sh@10 -- # set +x 00:33:52.829 ************************************ 00:33:52.829 START TEST nvmf_dif 00:33:52.829 ************************************ 00:33:52.829 02:33:20 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:52.829 * Looking for test storage... 
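The nvmftestfini teardown traced above unloads the initiator-side NVMe/TCP kernel modules, stops the target process, and drops the target network namespace. A minimal standalone sketch of that sequence, assuming the same cvl_0_* interface names and the target pid held in $nvmfpid (as in the killprocess trace):

sync
modprobe -v -r nvme-tcp              # also drags out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics          # no-op if the previous removal already took it
kill "$nvmfpid" && wait "$nvmfpid"   # "killprocess": stop the nvmf_tgt reactor
ip netns delete cvl_0_0_ns_spdk      # "_remove_spdk_ns": drop the target namespace
ip -4 addr flush cvl_0_1             # clear the initiator-side address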
00:33:52.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:52.829 02:33:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.829 02:33:20 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.829 02:33:20 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.829 02:33:20 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.829 02:33:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.829 02:33:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.829 02:33:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.829 02:33:20 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:33:52.829 02:33:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:52.829 02:33:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:52.829 02:33:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:52.829 02:33:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:52.829 02:33:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:52.829 02:33:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.829 02:33:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:52.829 02:33:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:52.829 02:33:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:33:52.829 02:33:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:54.730 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:54.730 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.730 02:33:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
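The device scan above matches NICs by PCI vendor:device pairs (0x8086:0x159b here is an Intel E810-family "ice" function), then looks up the netdevs bound under each matching function in sysfs. A rough standalone equivalent of that lookup, assuming pciutils is installed; the harness itself walks a prebuilt pci_bus_cache instead of calling lspci:

for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    ls "/sys/bus/pci/devices/$pci/net/"   # netdevs under the function, e.g. cvl_0_0
done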
00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:54.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:54.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.731 02:33:22 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:54.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:33:54.731 00:33:54.731 --- 10.0.0.2 ping statistics --- 00:33:54.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.731 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:33:54.731 00:33:54.731 --- 10.0.0.1 ping statistics --- 00:33:54.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.731 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:54.731 02:33:22 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:55.677 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:55.677 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:55.677 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:55.677 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:55.677 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:55.677 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:55.677 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:55.677 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:55.677 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:55.677 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:55.677 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:55.677 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:55.677 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:55.677 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:55.677 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:55.677 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:55.677 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:55.677 02:33:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:55.677 02:33:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1197908 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:55.677 02:33:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1197908 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1197908 ']' 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:55.677 02:33:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:55.677 [2024-07-27 02:33:23.800746] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:33:55.677 [2024-07-27 02:33:23.800824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.677 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.933 [2024-07-27 02:33:23.838564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:55.933 [2024-07-27 02:33:23.864916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.933 [2024-07-27 02:33:23.948727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.933 [2024-07-27 02:33:23.948778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.934 [2024-07-27 02:33:23.948802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.934 [2024-07-27 02:33:23.948813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.934 [2024-07-27 02:33:23.948823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
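The nvmfappstart step traced here launches nvmf_tgt inside the dedicated namespace and then blocks (waitforlisten) until the app answers on its RPC socket before any rpc_cmd is issued. A minimal sketch of that launch-and-wait pattern, with paths relative to the SPDK tree and the polling loop as an illustrative stand-in for waitforlisten:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# block until the app listens on its default RPC socket, /var/tmp/spdk.sock
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"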
00:33:55.934 [2024-07-27 02:33:23.948847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:33:55.934 02:33:24 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:55.934 02:33:24 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.934 02:33:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:55.934 02:33:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:55.934 [2024-07-27 02:33:24.077651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.934 02:33:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:55.934 02:33:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:56.191 ************************************ 00:33:56.191 START TEST fio_dif_1_default 00:33:56.191 ************************************ 00:33:56.191 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:56.192 bdev_null0 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:56.192 [2024-07-27 02:33:24.133915] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:56.192 { 00:33:56.192 "params": { 00:33:56.192 "name": "Nvme$subsystem", 00:33:56.192 "trtype": "$TEST_TRANSPORT", 00:33:56.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.192 "adrfam": "ipv4", 00:33:56.192 "trsvcid": "$NVMF_PORT", 00:33:56.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.192 "hdgst": ${hdgst:-false}, 00:33:56.192 "ddgst": ${ddgst:-false} 00:33:56.192 }, 00:33:56.192 "method": "bdev_nvme_attach_controller" 00:33:56.192 } 00:33:56.192 EOF 00:33:56.192 )") 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:56.192 "params": { 00:33:56.192 "name": "Nvme0", 00:33:56.192 "trtype": "tcp", 00:33:56.192 "traddr": "10.0.0.2", 00:33:56.192 "adrfam": "ipv4", 00:33:56.192 "trsvcid": "4420", 00:33:56.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.192 "hdgst": false, 00:33:56.192 "ddgst": false 00:33:56.192 }, 00:33:56.192 "method": "bdev_nvme_attach_controller" 00:33:56.192 }' 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:56.192 02:33:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.450 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:56.450 fio-3.35 00:33:56.450 Starting 1 thread 00:33:56.450 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.660 00:34:08.660 filename0: (groupid=0, jobs=1): err= 0: pid=1198134: Sat Jul 27 02:33:35 2024 00:34:08.660 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10011msec) 00:34:08.660 slat (nsec): min=4703, max=42644, avg=9390.17, stdev=2888.65 00:34:08.660 clat (usec): min=40902, max=44035, avg=41506.30, stdev=546.34 00:34:08.660 lat (usec): min=40910, max=44051, avg=41515.69, stdev=546.44 00:34:08.660 clat percentiles (usec): 00:34:08.660 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:08.660 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:34:08.660 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:08.660 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:34:08.660 | 99.99th=[43779] 00:34:08.660 bw ( KiB/s): min= 352, max= 416, per=99.69%, avg=384.00, stdev=10.38, samples=20 00:34:08.660 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:34:08.660 
lat (msec) : 50=100.00% 00:34:08.660 cpu : usr=89.41%, sys=10.31%, ctx=14, majf=0, minf=234 00:34:08.660 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.660 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.660 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:08.660 00:34:08.660 Run status group 0 (all jobs): 00:34:08.660 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10011-10011msec 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 00:34:08.660 real 0m11.165s 00:34:08.660 user 0m10.265s 00:34:08.660 sys 0m1.314s 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 ************************************ 00:34:08.660 END TEST fio_dif_1_default 00:34:08.660 ************************************ 00:34:08.660 02:33:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:08.660 02:33:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:08.660 02:33:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 ************************************ 00:34:08.660 START TEST fio_dif_1_multi_subsystems 00:34:08.660 ************************************ 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:08.660 02:33:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 bdev_null0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 [2024-07-27 02:33:35.347592] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 bdev_null1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:08.660 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:08.661 { 00:34:08.661 "params": { 00:34:08.661 "name": "Nvme$subsystem", 00:34:08.661 "trtype": "$TEST_TRANSPORT", 00:34:08.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.661 "adrfam": "ipv4", 00:34:08.661 "trsvcid": "$NVMF_PORT", 00:34:08.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.661 "hdgst": ${hdgst:-false}, 00:34:08.661 "ddgst": ${ddgst:-false} 00:34:08.661 }, 00:34:08.661 "method": "bdev_nvme_attach_controller" 00:34:08.661 } 00:34:08.661 EOF 00:34:08.661 )") 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:08.661 02:33:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:08.661 { 00:34:08.661 "params": { 00:34:08.661 "name": "Nvme$subsystem", 00:34:08.661 "trtype": "$TEST_TRANSPORT", 00:34:08.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.661 "adrfam": "ipv4", 00:34:08.661 "trsvcid": "$NVMF_PORT", 00:34:08.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.661 "hdgst": ${hdgst:-false}, 00:34:08.661 "ddgst": ${ddgst:-false} 00:34:08.661 }, 00:34:08.661 "method": "bdev_nvme_attach_controller" 00:34:08.661 } 00:34:08.661 EOF 00:34:08.661 )") 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
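gen_nvmf_target_json above assembles one bdev_nvme_attach_controller block per subsystem (here 0 and 1) and validates it through jq before handing it to fio; fio_bdev is the harness wrapper that LD_PRELOADs the spdk_bdev plugin and passes both streams as /dev/fd descriptors. Reduced to its essentials, and assuming the same helpers from nvmf/common.sh and target/dif.sh, the wiring is roughly:

fio_jobs=$(mktemp)                       # stand-in for the /dev/fd/61 job stream
gen_fio_conf > "$fio_jobs"               # per-file [filename0]/[filename1] sections
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    "$fio_jobs"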
00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:08.661 "params": { 00:34:08.661 "name": "Nvme0", 00:34:08.661 "trtype": "tcp", 00:34:08.661 "traddr": "10.0.0.2", 00:34:08.661 "adrfam": "ipv4", 00:34:08.661 "trsvcid": "4420", 00:34:08.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.661 "hdgst": false, 00:34:08.661 "ddgst": false 00:34:08.661 }, 00:34:08.661 "method": "bdev_nvme_attach_controller" 00:34:08.661 },{ 00:34:08.661 "params": { 00:34:08.661 "name": "Nvme1", 00:34:08.661 "trtype": "tcp", 00:34:08.661 "traddr": "10.0.0.2", 00:34:08.661 "adrfam": "ipv4", 00:34:08.661 "trsvcid": "4420", 00:34:08.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.661 "hdgst": false, 00:34:08.661 "ddgst": false 00:34:08.661 }, 00:34:08.661 "method": "bdev_nvme_attach_controller" 00:34:08.661 }' 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:08.661 02:33:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.661 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:08.661 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:08.661 fio-3.35 00:34:08.661 Starting 2 threads 00:34:08.661 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.661 00:34:18.661 filename0: (groupid=0, jobs=1): err= 0: pid=1199532: Sat Jul 27 02:33:46 2024 00:34:18.661 read: IOPS=188, BW=753KiB/s (772kB/s)(7552KiB/10023msec) 00:34:18.661 slat (nsec): min=5065, max=30958, avg=10318.99, stdev=4287.73 00:34:18.661 clat (usec): min=819, max=44348, avg=21200.54, stdev=20199.98 00:34:18.661 lat (usec): min=845, max=44379, avg=21210.86, stdev=20199.40 00:34:18.661 clat percentiles (usec): 00:34:18.661 | 1.00th=[ 857], 5.00th=[ 873], 10.00th=[ 881], 20.00th=[ 906], 00:34:18.661 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[41157], 60.00th=[41157], 00:34:18.661 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:34:18.661 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:34:18.661 | 99.99th=[44303] 
00:34:18.661 bw ( KiB/s): min= 672, max= 768, per=51.16%, avg=753.60, stdev=28.39, samples=20 00:34:18.661 iops : min= 168, max= 192, avg=188.40, stdev= 7.10, samples=20 00:34:18.661 lat (usec) : 1000=47.25% 00:34:18.661 lat (msec) : 2=2.54%, 50=50.21% 00:34:18.661 cpu : usr=93.42%, sys=6.13%, ctx=53, majf=0, minf=136 00:34:18.661 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.661 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.661 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:18.661 filename1: (groupid=0, jobs=1): err= 0: pid=1199533: Sat Jul 27 02:33:46 2024 00:34:18.661 read: IOPS=179, BW=719KiB/s (736kB/s)(7200KiB/10017msec) 00:34:18.661 slat (nsec): min=4488, max=73016, avg=10159.47, stdev=4840.57 00:34:18.661 clat (usec): min=804, max=46004, avg=22225.67, stdev=20600.53 00:34:18.661 lat (usec): min=812, max=46016, avg=22235.83, stdev=20600.85 00:34:18.661 clat percentiles (usec): 00:34:18.661 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 857], 20.00th=[ 988], 00:34:18.661 | 30.00th=[ 1045], 40.00th=[ 1090], 50.00th=[41157], 60.00th=[41157], 00:34:18.661 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:34:18.661 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:34:18.661 | 99.99th=[45876] 00:34:18.661 bw ( KiB/s): min= 608, max= 768, per=48.78%, avg=718.40, stdev=44.63, samples=20 00:34:18.661 iops : min= 152, max= 192, avg=179.60, stdev=11.16, samples=20 00:34:18.661 lat (usec) : 1000=20.56% 00:34:18.661 lat (msec) : 2=27.89%, 50=51.56% 00:34:18.661 cpu : usr=93.03%, sys=6.00%, ctx=29, majf=0, minf=170 00:34:18.661 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.661 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.661 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:18.661 00:34:18.661 Run status group 0 (all jobs): 00:34:18.661 READ: bw=1472KiB/s (1507kB/s), 719KiB/s-753KiB/s (736kB/s-772kB/s), io=14.4MiB (15.1MB), run=10017-10023msec 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.661 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.662 00:34:18.662 real 0m11.392s 00:34:18.662 user 0m20.069s 00:34:18.662 sys 0m1.549s 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 ************************************ 00:34:18.662 END TEST fio_dif_1_multi_subsystems 00:34:18.662 ************************************ 00:34:18.662 02:33:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:18.662 02:33:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:18.662 02:33:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 ************************************ 00:34:18.662 START TEST fio_dif_rand_params 00:34:18.662 ************************************ 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:18.662 
02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 bdev_null0 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.662 [2024-07-27 02:33:46.791209] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:18.662 { 00:34:18.662 "params": { 00:34:18.662 "name": "Nvme$subsystem", 00:34:18.662 "trtype": "$TEST_TRANSPORT", 00:34:18.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.662 "adrfam": "ipv4", 00:34:18.662 "trsvcid": "$NVMF_PORT", 00:34:18.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.662 "hdgst": ${hdgst:-false}, 00:34:18.662 "ddgst": ${ddgst:-false} 00:34:18.662 }, 00:34:18.662 "method": "bdev_nvme_attach_controller" 00:34:18.662 } 00:34:18.662 EOF 00:34:18.662 )") 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
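The create_subsystem 0 trace above amounts to four RPCs: create a DIF-protected null bdev, create the subsystem, attach the bdev as a namespace, and open a TCP listener. A hedged equivalent with scripts/rpc.py, using the same arguments as the rpc_cmd calls in the trace and assuming the default target socket:

  # 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 3,
  # exported over NVMe/TCP on 10.0.0.2:4420.
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
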
00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:18.662 "params": { 00:34:18.662 "name": "Nvme0", 00:34:18.662 "trtype": "tcp", 00:34:18.662 "traddr": "10.0.0.2", 00:34:18.662 "adrfam": "ipv4", 00:34:18.662 "trsvcid": "4420", 00:34:18.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:18.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:18.662 "hdgst": false, 00:34:18.662 "ddgst": false 00:34:18.662 }, 00:34:18.662 "method": "bdev_nvme_attach_controller" 00:34:18.662 }' 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:18.662 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:18.920 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:18.920 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:18.920 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:18.920 02:33:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.920 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:18.920 ... 
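The printf output above is the JSON handed to fio on an anonymous fd: a single bdev_nvme_attach_controller call that surfaces the remote namespace as a local bdev. The invocation itself is stock fio with the SPDK bdev plugin preloaded; a minimal sketch, assuming the JSON was saved to bdev.json and the job file to job.fio (both file names here are illustrative — the run above uses /dev/fd/62 and /dev/fd/61 instead):

  # Stock fio with SPDK's bdev ioengine preloaded from the build tree.
  LD_PRELOAD=./build/fio/spdk_bdev \
      fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./job.fio
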
00:34:18.920 fio-3.35 00:34:18.920 Starting 3 threads 00:34:19.178 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.736 00:34:25.736 filename0: (groupid=0, jobs=1): err= 0: pid=1200930: Sat Jul 27 02:33:52 2024 00:34:25.736 read: IOPS=209, BW=26.2MiB/s (27.4MB/s)(131MiB/5008msec) 00:34:25.736 slat (nsec): min=5241, max=75927, avg=13813.29, stdev=5473.41 00:34:25.736 clat (usec): min=5223, max=91539, avg=14315.58, stdev=14013.95 00:34:25.736 lat (usec): min=5235, max=91559, avg=14329.39, stdev=14013.99 00:34:25.736 clat percentiles (usec): 00:34:25.736 | 1.00th=[ 5473], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7242], 00:34:25.736 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10683], 00:34:25.736 | 70.00th=[11731], 80.00th=[12780], 90.00th=[49021], 95.00th=[52167], 00:34:25.736 | 99.00th=[54789], 99.50th=[55313], 99.90th=[89654], 99.95th=[91751], 00:34:25.736 | 99.99th=[91751] 00:34:25.736 bw ( KiB/s): min=16896, max=46336, per=36.54%, avg=26752.00, stdev=9479.49, samples=10 00:34:25.736 iops : min= 132, max= 362, avg=209.00, stdev=74.06, samples=10 00:34:25.736 lat (msec) : 10=54.77%, 20=33.78%, 50=3.34%, 100=8.11% 00:34:25.736 cpu : usr=90.99%, sys=8.23%, ctx=101, majf=0, minf=70 00:34:25.736 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.737 issued rwts: total=1048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.737 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:25.737 filename0: (groupid=0, jobs=1): err= 0: pid=1200931: Sat Jul 27 02:33:52 2024 00:34:25.737 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(103MiB/5007msec) 00:34:25.737 slat (nsec): min=4999, max=55025, avg=13529.52, stdev=4590.66 00:34:25.737 clat (usec): min=5542, max=93936, avg=18203.78, stdev=16037.21 00:34:25.737 lat (usec): min=5554, max=93954, avg=18217.31, stdev=16037.11 00:34:25.737 clat percentiles (usec): 00:34:25.737 | 1.00th=[ 5866], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 8848], 00:34:25.737 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[12387], 60.00th=[13566], 00:34:25.737 | 70.00th=[14877], 80.00th=[16909], 90.00th=[52167], 95.00th=[54264], 00:34:25.737 | 99.00th=[56361], 99.50th=[57934], 99.90th=[93848], 99.95th=[93848], 00:34:25.737 | 99.99th=[93848] 00:34:25.737 bw ( KiB/s): min=13312, max=29184, per=28.70%, avg=21017.60, stdev=4548.27, samples=10 00:34:25.737 iops : min= 104, max= 228, avg=164.20, stdev=35.53, samples=10 00:34:25.737 lat (msec) : 10=36.41%, 20=46.72%, 50=2.55%, 100=14.32% 00:34:25.737 cpu : usr=91.43%, sys=8.11%, ctx=6, majf=0, minf=134 00:34:25.737 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.737 issued rwts: total=824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.737 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:25.737 filename0: (groupid=0, jobs=1): err= 0: pid=1200932: Sat Jul 27 02:33:52 2024 00:34:25.737 read: IOPS=201, BW=25.1MiB/s (26.4MB/s)(127MiB/5047msec) 00:34:25.737 slat (nsec): min=5169, max=41709, avg=12912.08, stdev=3740.80 00:34:25.737 clat (usec): min=5698, max=92347, avg=14856.21, stdev=14265.81 00:34:25.737 lat (usec): min=5710, max=92366, avg=14869.12, stdev=14266.04 00:34:25.737 clat percentiles (usec): 
00:34:25.737 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 8356], 00:34:25.737 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11207], 00:34:25.737 | 70.00th=[12125], 80.00th=[13042], 90.00th=[49546], 95.00th=[51643], 00:34:25.737 | 99.00th=[54264], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:34:25.737 | 99.99th=[92799] 00:34:25.737 bw ( KiB/s): min=11776, max=36352, per=35.39%, avg=25912.50, stdev=7162.52, samples=10 00:34:25.737 iops : min= 92, max= 284, avg=202.40, stdev=55.95, samples=10 00:34:25.737 lat (msec) : 10=48.47%, 20=40.49%, 50=2.17%, 100=8.87% 00:34:25.737 cpu : usr=90.88%, sys=8.60%, ctx=12, majf=0, minf=91 00:34:25.737 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.737 issued rwts: total=1015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.737 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:25.737 00:34:25.737 Run status group 0 (all jobs): 00:34:25.737 READ: bw=71.5MiB/s (75.0MB/s), 20.6MiB/s-26.2MiB/s (21.6MB/s-27.4MB/s), io=361MiB (378MB), run=5007-5047msec 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
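The second pass (NULL_DIF=2, 4k blocks, 8 jobs, iodepth 16, 2 extra files) repeats the same four-RPC setup once per subsystem. A sketch under the same assumptions as the earlier snippets, matching the create_subsystems 0 1 2 trace that follows:

  # Three DIF type 2 null bdevs, each behind its own NVMe/TCP subsystem.
  for i in 0 1 2; do
    ./scripts/rpc.py bdev_null_create "bdev_null${i}" 64 512 --md-size 16 --dif-type 2
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" \
        --serial-number "53313233-${i}" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "bdev_null${i}"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" \
        -t tcp -a 10.0.0.2 -s 4420
  done
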
00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 bdev_null0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 [2024-07-27 02:33:53.006636] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 bdev_null1 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 bdev_null2 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.737 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
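Only gen_fio_conf's file-counter arithmetic is visible in the xtrace below; the job file itself goes to the second anonymous fd. For this three-file randread run it plausibly has the shape sketched here. The Nvme0n1..Nvme2n1 filenames follow SPDK's usual <controller-name>n<nsid> bdev naming for attached controllers and, like the global options, are an assumption reconstructed from the fio banner lines (rw=randread, bs=4096B, iodepth=16) rather than a verbatim copy:

# Approximate shape of the generated fio job file (the real run writes it
# to /dev/fd/61); 3 sections x numjobs=8 yields the 24 threads seen below.
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
numjobs=8
iodepth=16

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF
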
00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:25.738 { 00:34:25.738 "params": { 00:34:25.738 "name": "Nvme$subsystem", 00:34:25.738 "trtype": "$TEST_TRANSPORT", 00:34:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.738 "adrfam": "ipv4", 00:34:25.738 "trsvcid": "$NVMF_PORT", 00:34:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.738 "hdgst": ${hdgst:-false}, 00:34:25.738 "ddgst": ${ddgst:-false} 00:34:25.738 }, 00:34:25.738 "method": "bdev_nvme_attach_controller" 00:34:25.738 } 00:34:25.738 EOF 00:34:25.738 )") 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:25.738 { 00:34:25.738 "params": { 00:34:25.738 "name": "Nvme$subsystem", 00:34:25.738 "trtype": "$TEST_TRANSPORT", 00:34:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.738 "adrfam": "ipv4", 00:34:25.738 "trsvcid": "$NVMF_PORT", 00:34:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.738 "hdgst": ${hdgst:-false}, 00:34:25.738 "ddgst": ${ddgst:-false} 00:34:25.738 }, 00:34:25.738 "method": "bdev_nvme_attach_controller" 00:34:25.738 } 00:34:25.738 EOF 00:34:25.738 )") 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:25.738 { 00:34:25.738 "params": { 00:34:25.738 "name": "Nvme$subsystem", 00:34:25.738 "trtype": "$TEST_TRANSPORT", 00:34:25.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.738 "adrfam": "ipv4", 00:34:25.738 "trsvcid": "$NVMF_PORT", 00:34:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.738 "hdgst": ${hdgst:-false}, 00:34:25.738 "ddgst": ${ddgst:-false} 00:34:25.738 }, 00:34:25.738 "method": "bdev_nvme_attach_controller" 00:34:25.738 } 00:34:25.738 EOF 00:34:25.738 )") 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:25.738 "params": { 00:34:25.738 "name": "Nvme0", 00:34:25.738 "trtype": "tcp", 00:34:25.738 "traddr": "10.0.0.2", 00:34:25.738 "adrfam": "ipv4", 00:34:25.738 "trsvcid": "4420", 00:34:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:25.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:25.738 "hdgst": false, 00:34:25.738 "ddgst": false 00:34:25.738 }, 00:34:25.738 "method": "bdev_nvme_attach_controller" 00:34:25.738 },{ 00:34:25.738 "params": { 00:34:25.738 "name": "Nvme1", 00:34:25.738 "trtype": "tcp", 00:34:25.738 "traddr": "10.0.0.2", 00:34:25.738 "adrfam": "ipv4", 00:34:25.738 "trsvcid": "4420", 00:34:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:25.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:25.738 "hdgst": false, 00:34:25.738 "ddgst": false 00:34:25.738 }, 00:34:25.738 "method": "bdev_nvme_attach_controller" 00:34:25.738 },{ 00:34:25.738 "params": { 00:34:25.738 "name": "Nvme2", 00:34:25.738 "trtype": "tcp", 00:34:25.738 "traddr": "10.0.0.2", 00:34:25.738 "adrfam": "ipv4", 00:34:25.738 "trsvcid": "4420", 00:34:25.738 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:25.738 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:25.738 "hdgst": false, 00:34:25.738 "ddgst": false 00:34:25.738 }, 00:34:25.738 "method": "bdev_nvme_attach_controller" 00:34:25.738 }' 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:25.738 02:33:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.738 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:25.738 ... 00:34:25.738 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:25.738 ... 00:34:25.738 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:25.738 ... 00:34:25.738 fio-3.35 00:34:25.738 Starting 24 threads 00:34:25.738 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.937 00:34:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=1201792: Sat Jul 27 02:34:04 2024 00:34:37.937 read: IOPS=67, BW=272KiB/s (278kB/s)(2752KiB/10127msec) 00:34:37.937 slat (usec): min=11, max=103, avg=37.04, stdev=16.39 00:34:37.937 clat (msec): min=90, max=397, avg=235.17, stdev=54.48 00:34:37.937 lat (msec): min=90, max=397, avg=235.21, stdev=54.48 00:34:37.937 clat percentiles (msec): 00:34:37.937 | 1.00th=[ 91], 5.00th=[ 120], 10.00th=[ 167], 20.00th=[ 197], 00:34:37.937 | 30.00th=[ 209], 40.00th=[ 220], 50.00th=[ 230], 60.00th=[ 266], 00:34:37.937 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.937 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:34:37.937 | 99.99th=[ 397] 00:34:37.937 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=268.80, stdev=68.00, samples=20 00:34:37.937 iops : min= 32, max= 96, avg=67.20, stdev=17.00, samples=20 00:34:37.937 lat (msec) : 100=2.33%, 250=51.16%, 500=46.51% 00:34:37.937 cpu : usr=98.10%, sys=1.41%, ctx=57, majf=0, minf=9 00:34:37.937 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:34:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=1201793: Sat Jul 27 02:34:04 2024 00:34:37.937 read: IOPS=83, BW=335KiB/s (343kB/s)(3392KiB/10127msec) 00:34:37.937 slat (usec): min=3, max=185, avg=42.79, stdev=27.37 00:34:37.937 clat (msec): min=90, max=323, avg=190.08, stdev=29.18 00:34:37.937 lat (msec): min=90, max=323, avg=190.12, stdev=29.19 00:34:37.937 clat percentiles (msec): 00:34:37.937 | 1.00th=[ 91], 5.00th=[ 142], 10.00th=[ 165], 20.00th=[ 178], 00:34:37.937 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 197], 00:34:37.937 | 70.00th=[ 199], 80.00th=[ 211], 90.00th=[ 220], 95.00th=[ 230], 00:34:37.937 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 326], 00:34:37.937 | 99.99th=[ 326] 00:34:37.937 bw ( KiB/s): min= 256, max= 384, per=4.89%, avg=332.80, stdev=58.18, samples=20 00:34:37.937 iops : min= 64, max= 96, avg=83.20, stdev=14.54, samples=20 00:34:37.937 lat (msec) : 100=1.89%, 250=95.99%, 500=2.12% 00:34:37.937 cpu : usr=96.67%, sys=2.04%, ctx=142, majf=0, minf=9 00:34:37.937 IO depths : 
1=1.5%, 2=7.8%, 4=25.0%, 8=54.7%, 16=11.0%, 32=0.0%, >=64=0.0% 00:34:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=1201794: Sat Jul 27 02:34:04 2024 00:34:37.937 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10108msec) 00:34:37.937 slat (nsec): min=8063, max=88407, avg=27990.37, stdev=17842.39 00:34:37.937 clat (msec): min=128, max=389, avg=240.38, stdev=47.24 00:34:37.937 lat (msec): min=128, max=389, avg=240.41, stdev=47.23 00:34:37.937 clat percentiles (msec): 00:34:37.937 | 1.00th=[ 142], 5.00th=[ 163], 10.00th=[ 194], 20.00th=[ 199], 00:34:37.937 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 243], 60.00th=[ 268], 00:34:37.937 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.937 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:34:37.937 | 99.99th=[ 388] 00:34:37.937 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=262.40, stdev=75.12, samples=20 00:34:37.937 iops : min= 32, max= 96, avg=65.60, stdev=18.78, samples=20 00:34:37.937 lat (msec) : 250=57.44%, 500=42.56% 00:34:37.937 cpu : usr=98.03%, sys=1.51%, ctx=54, majf=0, minf=9 00:34:37.937 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:34:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=1201795: Sat Jul 27 02:34:04 2024 00:34:37.937 read: IOPS=65, BW=261KiB/s (267kB/s)(2624KiB/10056msec) 00:34:37.937 slat (usec): min=11, max=103, avg=36.36, stdev=17.03 00:34:37.937 clat (msec): min=195, max=310, avg=244.95, stdev=35.47 00:34:37.937 lat (msec): min=195, max=310, avg=244.99, stdev=35.46 00:34:37.937 clat percentiles (msec): 00:34:37.937 | 1.00th=[ 197], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 213], 00:34:37.937 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 266], 00:34:37.937 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.937 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:34:37.937 | 99.99th=[ 309] 00:34:37.937 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=256.00, stdev=71.93, samples=20 00:34:37.937 iops : min= 32, max= 96, avg=64.00, stdev=17.98, samples=20 00:34:37.937 lat (msec) : 250=51.22%, 500=48.78% 00:34:37.937 cpu : usr=97.39%, sys=1.79%, ctx=55, majf=0, minf=9 00:34:37.937 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=1201796: Sat Jul 27 02:34:04 2024 00:34:37.937 read: IOPS=65, BW=261KiB/s (267kB/s)(2624KiB/10061msec) 00:34:37.937 slat (usec): min=19, max=184, avg=67.17, stdev=13.11 00:34:37.937 clat (msec): min=193, max=374, avg=244.79, stdev=37.21 00:34:37.937 lat 
(msec): min=193, max=374, avg=244.86, stdev=37.22 00:34:37.937 clat percentiles (msec): 00:34:37.937 | 1.00th=[ 197], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 213], 00:34:37.937 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 266], 00:34:37.937 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 296], 95.00th=[ 300], 00:34:37.937 | 99.00th=[ 309], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 376], 00:34:37.937 | 99.99th=[ 376] 00:34:37.937 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=256.00, stdev=71.93, samples=20 00:34:37.937 iops : min= 32, max= 96, avg=64.00, stdev=17.98, samples=20 00:34:37.937 lat (msec) : 250=52.13%, 500=47.87% 00:34:37.937 cpu : usr=95.51%, sys=2.67%, ctx=104, majf=0, minf=9 00:34:37.937 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=1201797: Sat Jul 27 02:34:04 2024 00:34:37.937 read: IOPS=82, BW=329KiB/s (337kB/s)(3312KiB/10071msec) 00:34:37.937 slat (usec): min=8, max=100, avg=24.21, stdev=20.81 00:34:37.937 clat (msec): min=76, max=323, avg=194.41, stdev=32.10 00:34:37.937 lat (msec): min=76, max=323, avg=194.43, stdev=32.11 00:34:37.937 clat percentiles (msec): 00:34:37.937 | 1.00th=[ 77], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 178], 00:34:37.937 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 197], 00:34:37.937 | 70.00th=[ 199], 80.00th=[ 215], 90.00th=[ 228], 95.00th=[ 232], 00:34:37.937 | 99.00th=[ 292], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:34:37.937 | 99.99th=[ 326] 00:34:37.937 bw ( KiB/s): min= 256, max= 384, per=4.77%, avg=324.80, stdev=54.22, samples=20 00:34:37.937 iops : min= 64, max= 96, avg=81.20, stdev=13.56, samples=20 00:34:37.937 lat (msec) : 100=1.93%, 250=94.20%, 500=3.86% 00:34:37.937 cpu : usr=97.47%, sys=1.92%, ctx=143, majf=0, minf=9 00:34:37.937 IO depths : 1=1.9%, 2=5.3%, 4=16.3%, 8=65.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:34:37.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 complete : 0=0.0%, 4=91.7%, 8=2.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.937 issued rwts: total=828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.937 filename0: (groupid=0, jobs=1): err= 0: pid=1201798: Sat Jul 27 02:34:04 2024 00:34:37.937 read: IOPS=65, BW=261KiB/s (267kB/s)(2624KiB/10057msec) 00:34:37.937 slat (usec): min=18, max=245, avg=63.67, stdev=19.97 00:34:37.937 clat (msec): min=159, max=361, avg=244.73, stdev=36.29 00:34:37.937 lat (msec): min=159, max=361, avg=244.79, stdev=36.29 00:34:37.937 clat percentiles (msec): 00:34:37.937 | 1.00th=[ 197], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 213], 00:34:37.937 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 266], 00:34:37.937 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 296], 95.00th=[ 300], 00:34:37.937 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 363], 99.95th=[ 363], 00:34:37.937 | 99.99th=[ 363] 00:34:37.938 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=256.00, stdev=71.93, samples=20 00:34:37.938 iops : min= 32, max= 96, avg=64.00, stdev=17.98, samples=20 00:34:37.938 lat (msec) : 250=51.22%, 500=48.78% 00:34:37.938 cpu : usr=97.14%, sys=1.85%, 
ctx=29, majf=0, minf=9 00:34:37.938 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:37.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.938 filename0: (groupid=0, jobs=1): err= 0: pid=1201799: Sat Jul 27 02:34:04 2024 00:34:37.938 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10115msec) 00:34:37.938 slat (usec): min=11, max=283, avg=63.73, stdev=36.28 00:34:37.938 clat (msec): min=124, max=387, avg=240.20, stdev=39.05 00:34:37.938 lat (msec): min=124, max=388, avg=240.27, stdev=39.06 00:34:37.938 clat percentiles (msec): 00:34:37.938 | 1.00th=[ 169], 5.00th=[ 194], 10.00th=[ 197], 20.00th=[ 199], 00:34:37.938 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 241], 60.00th=[ 266], 00:34:37.938 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 300], 00:34:37.938 | 99.00th=[ 300], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:34:37.938 | 99.99th=[ 388] 00:34:37.938 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=262.40, stdev=65.33, samples=20 00:34:37.938 iops : min= 32, max= 96, avg=65.60, stdev=16.33, samples=20 00:34:37.938 lat (msec) : 250=59.82%, 500=40.18% 00:34:37.938 cpu : usr=96.05%, sys=2.32%, ctx=91, majf=0, minf=9 00:34:37.938 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:37.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.938 filename1: (groupid=0, jobs=1): err= 0: pid=1201800: Sat Jul 27 02:34:04 2024 00:34:37.938 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10114msec) 00:34:37.938 slat (usec): min=11, max=104, avg=52.14, stdev=25.64 00:34:37.938 clat (msec): min=162, max=383, avg=240.33, stdev=37.45 00:34:37.938 lat (msec): min=162, max=383, avg=240.39, stdev=37.46 00:34:37.938 clat percentiles (msec): 00:34:37.938 | 1.00th=[ 174], 5.00th=[ 194], 10.00th=[ 197], 20.00th=[ 199], 00:34:37.938 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 241], 60.00th=[ 266], 00:34:37.938 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 296], 00:34:37.938 | 99.00th=[ 300], 99.50th=[ 326], 99.90th=[ 384], 99.95th=[ 384], 00:34:37.938 | 99.99th=[ 384] 00:34:37.938 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=262.40, stdev=63.87, samples=20 00:34:37.938 iops : min= 32, max= 96, avg=65.60, stdev=15.97, samples=20 00:34:37.938 lat (msec) : 250=57.44%, 500=42.56% 00:34:37.938 cpu : usr=96.68%, sys=2.14%, ctx=35, majf=0, minf=9 00:34:37.938 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:34:37.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.938 filename1: (groupid=0, jobs=1): err= 0: pid=1201801: Sat Jul 27 02:34:04 2024 00:34:37.938 read: IOPS=84, BW=336KiB/s (344kB/s)(3400KiB/10112msec) 00:34:37.938 slat (nsec): min=6686, max=54447, avg=15078.15, stdev=6427.54 00:34:37.938 clat (msec): min=113, 
max=282, avg=189.54, stdev=29.87 00:34:37.938 lat (msec): min=113, max=282, avg=189.55, stdev=29.87 00:34:37.938 clat percentiles (msec): 00:34:37.938 | 1.00th=[ 114], 5.00th=[ 127], 10.00th=[ 159], 20.00th=[ 171], 00:34:37.938 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 197], 00:34:37.938 | 70.00th=[ 199], 80.00th=[ 213], 90.00th=[ 222], 95.00th=[ 241], 00:34:37.938 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:34:37.938 | 99.99th=[ 284] 00:34:37.938 bw ( KiB/s): min= 256, max= 384, per=4.90%, avg=333.60, stdev=57.64, samples=20 00:34:37.938 iops : min= 64, max= 96, avg=83.40, stdev=14.41, samples=20 00:34:37.938 lat (msec) : 250=97.41%, 500=2.59% 00:34:37.938 cpu : usr=96.90%, sys=2.13%, ctx=54, majf=0, minf=9 00:34:37.938 IO depths : 1=3.4%, 2=8.5%, 4=21.3%, 8=57.6%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:37.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 issued rwts: total=850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.938 filename1: (groupid=0, jobs=1): err= 0: pid=1201802: Sat Jul 27 02:34:04 2024 00:34:37.938 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10115msec) 00:34:37.938 slat (usec): min=11, max=329, avg=69.69, stdev=26.02 00:34:37.938 clat (msec): min=122, max=377, avg=240.83, stdev=42.53 00:34:37.938 lat (msec): min=122, max=377, avg=240.90, stdev=42.54 00:34:37.938 clat percentiles (msec): 00:34:37.938 | 1.00th=[ 123], 5.00th=[ 194], 10.00th=[ 197], 20.00th=[ 199], 00:34:37.938 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 241], 60.00th=[ 268], 00:34:37.938 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 300], 00:34:37.938 | 99.00th=[ 305], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:34:37.938 | 99.99th=[ 376] 00:34:37.938 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=261.60, stdev=64.27, samples=20 00:34:37.938 iops : min= 32, max= 96, avg=65.40, stdev=16.07, samples=20 00:34:37.938 lat (msec) : 250=56.72%, 500=43.28% 00:34:37.938 cpu : usr=93.62%, sys=3.37%, ctx=245, majf=0, minf=9 00:34:37.938 IO depths : 1=4.0%, 2=10.3%, 4=25.1%, 8=52.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:34:37.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.938 filename1: (groupid=0, jobs=1): err= 0: pid=1201803: Sat Jul 27 02:34:04 2024 00:34:37.938 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10106msec) 00:34:37.938 slat (usec): min=8, max=140, avg=28.84, stdev=18.28 00:34:37.938 clat (msec): min=123, max=388, avg=240.34, stdev=42.55 00:34:37.938 lat (msec): min=123, max=388, avg=240.37, stdev=42.54 00:34:37.938 clat percentiles (msec): 00:34:37.938 | 1.00th=[ 124], 5.00th=[ 186], 10.00th=[ 197], 20.00th=[ 201], 00:34:37.938 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 230], 60.00th=[ 268], 00:34:37.938 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.938 | 99.00th=[ 305], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 388], 00:34:37.938 | 99.99th=[ 388] 00:34:37.938 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=262.40, stdev=60.85, samples=20 00:34:37.938 iops : min= 32, max= 96, avg=65.60, stdev=15.21, samples=20 00:34:37.938 lat (msec) : 250=54.46%, 500=45.54% 
00:34:37.938 cpu : usr=96.85%, sys=2.06%, ctx=53, majf=0, minf=9 00:34:37.938 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:34:37.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.938 filename1: (groupid=0, jobs=1): err= 0: pid=1201804: Sat Jul 27 02:34:04 2024 00:34:37.938 read: IOPS=71, BW=285KiB/s (292kB/s)(2880KiB/10115msec) 00:34:37.938 slat (usec): min=11, max=289, avg=46.01, stdev=33.63 00:34:37.938 clat (msec): min=76, max=374, avg=224.41, stdev=50.08 00:34:37.938 lat (msec): min=76, max=374, avg=224.45, stdev=50.09 00:34:37.938 clat percentiles (msec): 00:34:37.938 | 1.00th=[ 78], 5.00th=[ 136], 10.00th=[ 169], 20.00th=[ 197], 00:34:37.938 | 30.00th=[ 199], 40.00th=[ 211], 50.00th=[ 220], 60.00th=[ 232], 00:34:37.938 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 296], 00:34:37.938 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 376], 99.95th=[ 376], 00:34:37.938 | 99.99th=[ 376] 00:34:37.938 bw ( KiB/s): min= 144, max= 384, per=4.14%, avg=281.60, stdev=73.67, samples=20 00:34:37.938 iops : min= 36, max= 96, avg=70.40, stdev=18.42, samples=20 00:34:37.938 lat (msec) : 100=2.22%, 250=60.56%, 500=37.22% 00:34:37.938 cpu : usr=95.66%, sys=2.59%, ctx=56, majf=0, minf=9 00:34:37.938 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:37.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.938 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.938 filename1: (groupid=0, jobs=1): err= 0: pid=1201805: Sat Jul 27 02:34:04 2024 00:34:37.938 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10115msec) 00:34:37.938 slat (nsec): min=7475, max=93837, avg=63622.07, stdev=15523.67 00:34:37.938 clat (msec): min=122, max=386, avg=240.85, stdev=41.29 00:34:37.938 lat (msec): min=122, max=387, avg=240.91, stdev=41.30 00:34:37.938 clat percentiles (msec): 00:34:37.938 | 1.00th=[ 124], 5.00th=[ 194], 10.00th=[ 197], 20.00th=[ 199], 00:34:37.938 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 241], 60.00th=[ 266], 00:34:37.938 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 300], 00:34:37.938 | 99.00th=[ 305], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 388], 00:34:37.938 | 99.99th=[ 388] 00:34:37.939 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=261.60, stdev=65.51, samples=20 00:34:37.939 iops : min= 32, max= 96, avg=65.40, stdev=16.38, samples=20 00:34:37.939 lat (msec) : 250=57.31%, 500=42.69% 00:34:37.939 cpu : usr=98.09%, sys=1.39%, ctx=24, majf=0, minf=9 00:34:37.939 IO depths : 1=6.0%, 2=12.2%, 4=25.1%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:37.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.939 filename1: (groupid=0, jobs=1): err= 0: pid=1201806: Sat Jul 27 02:34:04 2024 00:34:37.939 read: IOPS=86, BW=347KiB/s (355kB/s)(3512KiB/10126msec) 00:34:37.939 slat (nsec): min=11593, max=55210, 
avg=20531.76, stdev=6088.04 00:34:37.939 clat (msec): min=89, max=287, avg=183.49, stdev=32.23 00:34:37.939 lat (msec): min=89, max=287, avg=183.51, stdev=32.23 00:34:37.939 clat percentiles (msec): 00:34:37.939 | 1.00th=[ 90], 5.00th=[ 125], 10.00th=[ 146], 20.00th=[ 159], 00:34:37.939 | 30.00th=[ 171], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:34:37.939 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 215], 95.00th=[ 222], 00:34:37.939 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 288], 99.95th=[ 288], 00:34:37.939 | 99.99th=[ 288] 00:34:37.939 bw ( KiB/s): min= 256, max= 384, per=5.06%, avg=344.80, stdev=42.32, samples=20 00:34:37.939 iops : min= 64, max= 96, avg=86.20, stdev=10.58, samples=20 00:34:37.939 lat (msec) : 100=1.82%, 250=94.99%, 500=3.19% 00:34:37.939 cpu : usr=97.40%, sys=2.03%, ctx=11, majf=0, minf=9 00:34:37.939 IO depths : 1=1.4%, 2=4.3%, 4=14.9%, 8=68.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:34:37.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 complete : 0=0.0%, 4=91.2%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 issued rwts: total=878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.939 filename1: (groupid=0, jobs=1): err= 0: pid=1201807: Sat Jul 27 02:34:04 2024 00:34:37.939 read: IOPS=65, BW=261KiB/s (267kB/s)(2624KiB/10069msec) 00:34:37.939 slat (nsec): min=5712, max=73873, avg=28759.42, stdev=11797.97 00:34:37.939 clat (msec): min=156, max=310, avg=245.32, stdev=35.87 00:34:37.939 lat (msec): min=156, max=310, avg=245.35, stdev=35.87 00:34:37.939 clat percentiles (msec): 00:34:37.939 | 1.00th=[ 197], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 213], 00:34:37.939 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 268], 00:34:37.939 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.939 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:34:37.939 | 99.99th=[ 309] 00:34:37.939 bw ( KiB/s): min= 128, max= 384, per=3.75%, avg=256.00, stdev=69.26, samples=20 00:34:37.939 iops : min= 32, max= 96, avg=64.00, stdev=17.31, samples=20 00:34:37.939 lat (msec) : 250=50.91%, 500=49.09% 00:34:37.939 cpu : usr=98.05%, sys=1.54%, ctx=9, majf=0, minf=9 00:34:37.939 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:34:37.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.939 filename2: (groupid=0, jobs=1): err= 0: pid=1201808: Sat Jul 27 02:34:04 2024 00:34:37.939 read: IOPS=81, BW=327KiB/s (335kB/s)(3304KiB/10112msec) 00:34:37.939 slat (nsec): min=8029, max=87780, avg=25528.07, stdev=20098.83 00:34:37.939 clat (msec): min=76, max=300, avg=194.95, stdev=36.07 00:34:37.939 lat (msec): min=76, max=300, avg=194.97, stdev=36.07 00:34:37.939 clat percentiles (msec): 00:34:37.939 | 1.00th=[ 77], 5.00th=[ 128], 10.00th=[ 153], 20.00th=[ 176], 00:34:37.939 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 199], 00:34:37.939 | 70.00th=[ 213], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 257], 00:34:37.939 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 300], 00:34:37.939 | 99.99th=[ 300] 00:34:37.939 bw ( KiB/s): min= 256, max= 384, per=4.77%, avg=324.00, stdev=56.84, samples=20 00:34:37.939 iops : min= 64, max= 96, 
avg=81.00, stdev=14.21, samples=20 00:34:37.939 lat (msec) : 100=1.94%, 250=92.98%, 500=5.08% 00:34:37.939 cpu : usr=97.96%, sys=1.61%, ctx=19, majf=0, minf=9 00:34:37.939 IO depths : 1=3.3%, 2=7.6%, 4=19.0%, 8=60.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:34:37.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 issued rwts: total=826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.939 filename2: (groupid=0, jobs=1): err= 0: pid=1201809: Sat Jul 27 02:34:04 2024 00:34:37.939 read: IOPS=86, BW=348KiB/s (356kB/s)(3520KiB/10129msec) 00:34:37.939 slat (nsec): min=3622, max=57332, avg=15664.89, stdev=8203.97 00:34:37.939 clat (msec): min=92, max=296, avg=183.37, stdev=28.80 00:34:37.939 lat (msec): min=92, max=296, avg=183.38, stdev=28.80 00:34:37.939 clat percentiles (msec): 00:34:37.939 | 1.00th=[ 93], 5.00th=[ 123], 10.00th=[ 146], 20.00th=[ 167], 00:34:37.939 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 194], 00:34:37.939 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 215], 95.00th=[ 220], 00:34:37.939 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 296], 99.95th=[ 296], 00:34:37.939 | 99.99th=[ 296] 00:34:37.939 bw ( KiB/s): min= 256, max= 384, per=5.08%, avg=345.60, stdev=53.55, samples=20 00:34:37.939 iops : min= 64, max= 96, avg=86.40, stdev=13.39, samples=20 00:34:37.939 lat (msec) : 100=1.82%, 250=97.05%, 500=1.14% 00:34:37.939 cpu : usr=98.11%, sys=1.53%, ctx=12, majf=0, minf=9 00:34:37.939 IO depths : 1=1.5%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:34:37.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.939 filename2: (groupid=0, jobs=1): err= 0: pid=1201810: Sat Jul 27 02:34:04 2024 00:34:37.939 read: IOPS=66, BW=267KiB/s (273kB/s)(2688KiB/10072msec) 00:34:37.939 slat (usec): min=12, max=263, avg=67.44, stdev=24.20 00:34:37.939 clat (msec): min=77, max=310, avg=239.20, stdev=43.75 00:34:37.939 lat (msec): min=77, max=310, avg=239.26, stdev=43.76 00:34:37.939 clat percentiles (msec): 00:34:37.939 | 1.00th=[ 78], 5.00th=[ 197], 10.00th=[ 197], 20.00th=[ 201], 00:34:37.939 | 30.00th=[ 213], 40.00th=[ 220], 50.00th=[ 230], 60.00th=[ 266], 00:34:37.939 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.939 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:34:37.939 | 99.99th=[ 309] 00:34:37.939 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=262.40, stdev=65.33, samples=20 00:34:37.939 iops : min= 32, max= 96, avg=65.60, stdev=16.33, samples=20 00:34:37.939 lat (msec) : 100=2.38%, 250=52.38%, 500=45.24% 00:34:37.939 cpu : usr=95.80%, sys=2.50%, ctx=86, majf=0, minf=9 00:34:37.939 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:37.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.939 filename2: (groupid=0, jobs=1): err= 0: pid=1201811: Sat Jul 27 02:34:04 2024 00:34:37.939 read: 
IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10098msec) 00:34:37.939 slat (usec): min=17, max=138, avg=53.66, stdev=19.24 00:34:37.939 clat (msec): min=115, max=390, avg=245.83, stdev=47.86 00:34:37.939 lat (msec): min=115, max=390, avg=245.88, stdev=47.86 00:34:37.939 clat percentiles (msec): 00:34:37.939 | 1.00th=[ 122], 5.00th=[ 180], 10.00th=[ 197], 20.00th=[ 199], 00:34:37.939 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 251], 60.00th=[ 268], 00:34:37.939 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 338], 00:34:37.939 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 393], 99.95th=[ 393], 00:34:37.939 | 99.99th=[ 393] 00:34:37.939 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=256.00, stdev=69.26, samples=20 00:34:37.939 iops : min= 32, max= 96, avg=64.00, stdev=17.31, samples=20 00:34:37.939 lat (msec) : 250=49.70%, 500=50.30% 00:34:37.939 cpu : usr=96.14%, sys=2.34%, ctx=72, majf=0, minf=10 00:34:37.939 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:34:37.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.939 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.939 filename2: (groupid=0, jobs=1): err= 0: pid=1201812: Sat Jul 27 02:34:04 2024 00:34:37.939 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10106msec) 00:34:37.939 slat (nsec): min=6200, max=77743, avg=30916.27, stdev=16225.17 00:34:37.939 clat (msec): min=121, max=299, avg=240.32, stdev=39.05 00:34:37.939 lat (msec): min=121, max=299, avg=240.36, stdev=39.05 00:34:37.939 clat percentiles (msec): 00:34:37.939 | 1.00th=[ 122], 5.00th=[ 194], 10.00th=[ 197], 20.00th=[ 199], 00:34:37.939 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 241], 60.00th=[ 266], 00:34:37.939 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 292], 00:34:37.939 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:34:37.939 | 99.99th=[ 300] 00:34:37.939 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=262.40, stdev=65.33, samples=20 00:34:37.940 iops : min= 32, max= 96, avg=65.60, stdev=16.33, samples=20 00:34:37.940 lat (msec) : 250=52.38%, 500=47.62% 00:34:37.940 cpu : usr=98.37%, sys=1.22%, ctx=38, majf=0, minf=9 00:34:37.940 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:37.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.940 filename2: (groupid=0, jobs=1): err= 0: pid=1201813: Sat Jul 27 02:34:04 2024 00:34:37.940 read: IOPS=71, BW=285KiB/s (292kB/s)(2880KiB/10112msec) 00:34:37.940 slat (usec): min=10, max=101, avg=44.70, stdev=24.29 00:34:37.940 clat (msec): min=117, max=328, avg=223.76, stdev=40.70 00:34:37.940 lat (msec): min=117, max=328, avg=223.80, stdev=40.70 00:34:37.940 clat percentiles (msec): 00:34:37.940 | 1.00th=[ 118], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 194], 00:34:37.940 | 30.00th=[ 199], 40.00th=[ 199], 50.00th=[ 215], 60.00th=[ 222], 00:34:37.940 | 70.00th=[ 253], 80.00th=[ 268], 90.00th=[ 284], 95.00th=[ 292], 00:34:37.940 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 330], 99.95th=[ 330], 00:34:37.940 | 99.99th=[ 330] 00:34:37.940 bw ( KiB/s): min= 128, max= 384, per=4.14%, 
avg=281.60, stdev=71.82, samples=20 00:34:37.940 iops : min= 32, max= 96, avg=70.40, stdev=17.95, samples=20 00:34:37.940 lat (msec) : 250=69.72%, 500=30.28% 00:34:37.940 cpu : usr=97.97%, sys=1.45%, ctx=49, majf=0, minf=9 00:34:37.940 IO depths : 1=2.2%, 2=8.1%, 4=23.8%, 8=55.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:34:37.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.940 filename2: (groupid=0, jobs=1): err= 0: pid=1201814: Sat Jul 27 02:34:04 2024 00:34:37.940 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10096msec) 00:34:37.940 slat (usec): min=21, max=109, avg=65.40, stdev=16.60 00:34:37.940 clat (msec): min=195, max=336, avg=244.93, stdev=35.72 00:34:37.940 lat (msec): min=195, max=336, avg=244.99, stdev=35.72 00:34:37.940 clat percentiles (msec): 00:34:37.940 | 1.00th=[ 197], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 213], 00:34:37.940 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 266], 00:34:37.940 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.940 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 338], 99.95th=[ 338], 00:34:37.940 | 99.99th=[ 338] 00:34:37.940 bw ( KiB/s): min= 144, max= 368, per=3.75%, avg=256.00, stdev=63.58, samples=20 00:34:37.940 iops : min= 36, max= 92, avg=64.00, stdev=15.89, samples=20 00:34:37.940 lat (msec) : 250=50.91%, 500=49.09% 00:34:37.940 cpu : usr=96.22%, sys=2.30%, ctx=88, majf=0, minf=9 00:34:37.940 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:37.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.940 filename2: (groupid=0, jobs=1): err= 0: pid=1201815: Sat Jul 27 02:34:04 2024 00:34:37.940 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10102msec) 00:34:37.940 slat (nsec): min=3706, max=66478, avg=28011.59, stdev=7881.41 00:34:37.940 clat (msec): min=196, max=310, avg=245.32, stdev=35.33 00:34:37.940 lat (msec): min=196, max=310, avg=245.35, stdev=35.33 00:34:37.940 clat percentiles (msec): 00:34:37.940 | 1.00th=[ 197], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 213], 00:34:37.940 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 266], 00:34:37.940 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 300], 00:34:37.940 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:34:37.940 | 99.99th=[ 309] 00:34:37.940 bw ( KiB/s): min= 144, max= 384, per=3.75%, avg=256.00, stdev=53.70, samples=20 00:34:37.940 iops : min= 36, max= 96, avg=64.00, stdev=13.42, samples=20 00:34:37.940 lat (msec) : 250=50.61%, 500=49.39% 00:34:37.940 cpu : usr=98.08%, sys=1.57%, ctx=16, majf=0, minf=9 00:34:37.940 IO depths : 1=2.0%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:34:37.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.940 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.940 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:37.940 00:34:37.940 Run status group 0 (all jobs): 
00:34:37.940 READ: bw=6793KiB/s (6956kB/s), 260KiB/s-348KiB/s (266kB/s-356kB/s), io=67.2MiB (70.5MB), run=10056-10129msec 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.940 bdev_null0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.940 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.941 [2024-07-27 02:34:04.898056] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
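The create_subsystem helper traced above reduces to four RPCs per subsystem: create a null bdev carrying 16-byte metadata with the requested DIF type, create the NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. A minimal standalone sketch with the exact values from this run (assumes a running nvmf_tgt and SPDK's scripts/rpc.py; the rpc path is illustrative):

  rpc=scripts/rpc.py                      # adjust to your SPDK checkout
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  $rpc bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      --serial-number 53313233-1 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420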
00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.941 bdev_null1 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:37.941 { 00:34:37.941 "params": { 00:34:37.941 "name": "Nvme$subsystem", 00:34:37.941 "trtype": "$TEST_TRANSPORT", 00:34:37.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.941 "adrfam": "ipv4", 00:34:37.941 "trsvcid": "$NVMF_PORT", 00:34:37.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.941 "hdgst": ${hdgst:-false}, 00:34:37.941 "ddgst": ${ddgst:-false} 00:34:37.941 }, 00:34:37.941 "method": "bdev_nvme_attach_controller" 00:34:37.941 } 00:34:37.941 EOF 00:34:37.941 )") 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:37.941 { 00:34:37.941 "params": { 00:34:37.941 "name": "Nvme$subsystem", 00:34:37.941 "trtype": "$TEST_TRANSPORT", 00:34:37.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.941 "adrfam": "ipv4", 00:34:37.941 "trsvcid": "$NVMF_PORT", 00:34:37.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.941 "hdgst": ${hdgst:-false}, 00:34:37.941 "ddgst": ${ddgst:-false} 00:34:37.941 }, 00:34:37.941 "method": "bdev_nvme_attach_controller" 00:34:37.941 } 00:34:37.941 EOF 00:34:37.941 )") 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
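The /dev/fd/62 and /dev/fd/61 arguments in the trace are bash process substitutions: the JSON document being assembled above and the generated fio job file are handed to fio as anonymous pipes and never touch disk. A condensed sketch of the pattern, assuming the fio_bdev wrapper simply LD_PRELOADs SPDK's fio plugin (the fd numbers are bash's choice at runtime; create_json_sub_conf and gen_fio_conf are the script's own helpers):

  fio_bdev() {
      # preload the spdk_bdev ioengine so plain fio can resolve it
      LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev fio "$@"
  }
  # first <() becomes the --spdk_json_conf document, second <() the fio job file
  fio_bdev --ioengine=spdk_bdev --spdk_json_conf <(create_json_sub_conf 0 1) <(gen_fio_conf)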
00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:37.941 "params": { 00:34:37.941 "name": "Nvme0", 00:34:37.941 "trtype": "tcp", 00:34:37.941 "traddr": "10.0.0.2", 00:34:37.941 "adrfam": "ipv4", 00:34:37.941 "trsvcid": "4420", 00:34:37.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:37.941 "hdgst": false, 00:34:37.941 "ddgst": false 00:34:37.941 }, 00:34:37.941 "method": "bdev_nvme_attach_controller" 00:34:37.941 },{ 00:34:37.941 "params": { 00:34:37.941 "name": "Nvme1", 00:34:37.941 "trtype": "tcp", 00:34:37.941 "traddr": "10.0.0.2", 00:34:37.941 "adrfam": "ipv4", 00:34:37.941 "trsvcid": "4420", 00:34:37.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:37.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:37.941 "hdgst": false, 00:34:37.941 "ddgst": false 00:34:37.941 }, 00:34:37.941 "method": "bdev_nvme_attach_controller" 00:34:37.941 }' 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:37.941 02:34:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.941 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:37.941 ... 00:34:37.941 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:37.941 ... 
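The two job sections just listed come from gen_fio_conf; an approximate reconstruction of that job file follows. This is a sketch rather than a capture: time_based is assumed, and the Nvme0n1/Nvme1n1 filenames assume SPDK's default namespace naming for controllers attached as Nvme0 and Nvme1.

  cat <<'EOF' > dif_rand_params.fio
  [global]
  ioengine=spdk_bdev
  rw=randread
  bs=8k,16k,128k    ; read/write/trim sizes, matching bs=(R)/(W)/(T) above
  iodepth=8
  numjobs=2         ; hence the 4 threads started below
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1
  EOF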
00:34:37.941 fio-3.35 00:34:37.941 Starting 4 threads 00:34:37.941 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.206 00:34:43.206 filename0: (groupid=0, jobs=1): err= 0: pid=1203200: Sat Jul 27 02:34:11 2024 00:34:43.206 read: IOPS=1884, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5001msec) 00:34:43.206 slat (nsec): min=4261, max=32761, avg=10946.14, stdev=3146.26 00:34:43.206 clat (usec): min=2042, max=8145, avg=4212.93, stdev=733.36 00:34:43.206 lat (usec): min=2050, max=8158, avg=4223.87, stdev=732.98 00:34:43.206 clat percentiles (usec): 00:34:43.206 | 1.00th=[ 3294], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3818], 00:34:43.206 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:34:43.206 | 70.00th=[ 4047], 80.00th=[ 4293], 90.00th=[ 5735], 95.00th=[ 5932], 00:34:43.206 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7635], 99.95th=[ 7898], 00:34:43.206 | 99.99th=[ 8160] 00:34:43.206 bw ( KiB/s): min=14864, max=15568, per=23.96%, avg=15070.22, stdev=217.39, samples=9 00:34:43.206 iops : min= 1858, max= 1946, avg=1883.78, stdev=27.17, samples=9 00:34:43.206 lat (msec) : 4=57.58%, 10=42.42% 00:34:43.206 cpu : usr=94.12%, sys=5.42%, ctx=8, majf=0, minf=9 00:34:43.206 IO depths : 1=0.1%, 2=0.3%, 4=72.2%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.206 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.206 issued rwts: total=9423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.206 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:43.206 filename0: (groupid=0, jobs=1): err= 0: pid=1203201: Sat Jul 27 02:34:11 2024 00:34:43.206 read: IOPS=1953, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5002msec) 00:34:43.206 slat (nsec): min=4204, max=36616, avg=11120.01, stdev=3247.47 00:34:43.206 clat (usec): min=2392, max=6825, avg=4059.11, stdev=596.08 00:34:43.206 lat (usec): min=2402, max=6842, avg=4070.23, stdev=595.91 00:34:43.206 clat percentiles (usec): 00:34:43.206 | 1.00th=[ 2966], 5.00th=[ 3392], 10.00th=[ 3589], 20.00th=[ 3752], 00:34:43.206 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4015], 00:34:43.206 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4621], 95.00th=[ 5735], 00:34:43.206 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6587], 99.95th=[ 6652], 00:34:43.206 | 99.99th=[ 6849] 00:34:43.206 bw ( KiB/s): min=15024, max=16128, per=24.84%, avg=15624.00, stdev=428.68, samples=10 00:34:43.206 iops : min= 1878, max= 2016, avg=1953.00, stdev=53.58, samples=10 00:34:43.206 lat (msec) : 4=58.89%, 10=41.11% 00:34:43.206 cpu : usr=93.58%, sys=5.80%, ctx=72, majf=0, minf=0 00:34:43.206 IO depths : 1=0.1%, 2=3.9%, 4=68.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.206 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.206 issued rwts: total=9773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.206 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:43.206 filename1: (groupid=0, jobs=1): err= 0: pid=1203202: Sat Jul 27 02:34:11 2024 00:34:43.206 read: IOPS=1993, BW=15.6MiB/s (16.3MB/s)(77.9MiB/5001msec) 00:34:43.206 slat (nsec): min=4494, max=34664, avg=11650.40, stdev=3566.46 00:34:43.206 clat (usec): min=1346, max=6403, avg=3979.97, stdev=439.56 00:34:43.206 lat (usec): min=1354, max=6411, avg=3991.62, stdev=439.29 00:34:43.206 clat percentiles (usec): 00:34:43.206 | 1.00th=[ 3130], 5.00th=[ 3523], 10.00th=[ 3654], 
20.00th=[ 3752], 00:34:43.206 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4015], 00:34:43.206 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4228], 95.00th=[ 4621], 00:34:43.207 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 6325], 99.95th=[ 6390], 00:34:43.207 | 99.99th=[ 6390] 00:34:43.207 bw ( KiB/s): min=15696, max=16256, per=25.49%, avg=16032.00, stdev=206.30, samples=9 00:34:43.207 iops : min= 1962, max= 2032, avg=2004.00, stdev=25.79, samples=9 00:34:43.207 lat (msec) : 2=0.04%, 4=59.04%, 10=40.92% 00:34:43.207 cpu : usr=92.06%, sys=7.40%, ctx=17, majf=0, minf=9 00:34:43.207 IO depths : 1=0.1%, 2=0.7%, 4=69.3%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.207 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.207 issued rwts: total=9969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.207 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:43.207 filename1: (groupid=0, jobs=1): err= 0: pid=1203203: Sat Jul 27 02:34:11 2024 00:34:43.207 read: IOPS=2033, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5003msec) 00:34:43.207 slat (usec): min=4, max=535, avg=12.22, stdev=10.82 00:34:43.207 clat (usec): min=1877, max=9575, avg=3894.26, stdev=488.99 00:34:43.207 lat (usec): min=1885, max=9591, avg=3906.48, stdev=489.06 00:34:43.207 clat percentiles (usec): 00:34:43.207 | 1.00th=[ 2704], 5.00th=[ 3097], 10.00th=[ 3392], 20.00th=[ 3621], 00:34:43.207 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 3982], 00:34:43.207 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4228], 95.00th=[ 4817], 00:34:43.207 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[ 6849], 99.95th=[ 7767], 00:34:43.207 | 99.99th=[ 7767] 00:34:43.207 bw ( KiB/s): min=15824, max=17328, per=25.86%, avg=16265.60, stdev=430.73, samples=10 00:34:43.207 iops : min= 1978, max= 2166, avg=2033.20, stdev=53.84, samples=10 00:34:43.207 lat (msec) : 2=0.06%, 4=64.76%, 10=35.18% 00:34:43.207 cpu : usr=66.47%, sys=15.79%, ctx=201, majf=0, minf=0 00:34:43.207 IO depths : 1=0.2%, 2=7.6%, 4=63.2%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.207 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.207 issued rwts: total=10174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.207 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:43.207 00:34:43.207 Run status group 0 (all jobs): 00:34:43.207 READ: bw=61.4MiB/s (64.4MB/s), 14.7MiB/s-15.9MiB/s (15.4MB/s-16.7MB/s), io=307MiB (322MB), run=5001-5003msec 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
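A quick sanity check on the run summary above: for a fixed-block random read job, bandwidth is simply IOPS times block size. Taking the fastest thread (IOPS=2033 at bs=8k), a back-of-envelope check that reproduces fio's reported figure:

  echo $((2033 * 8)) KiB/s      # 16264 KiB/s ~= 15.9 MiB/s
  echo $((2033 * 8192)) B/s     # 16654336 B/s ~= 16.7 MB/s, matching "15.9MiB/s (16.7MB/s)"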
00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.207 00:34:43.207 real 0m24.504s 00:34:43.207 user 4m30.766s 00:34:43.207 sys 0m8.711s 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 ************************************ 00:34:43.207 END TEST fio_dif_rand_params 00:34:43.207 ************************************ 00:34:43.207 02:34:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:43.207 02:34:11 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:43.207 02:34:11 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 ************************************ 00:34:43.207 START TEST fio_dif_digest 00:34:43.207 ************************************ 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:43.207 02:34:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 bdev_null0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 [2024-07-27 02:34:11.351539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:43.207 { 00:34:43.207 "params": { 00:34:43.207 "name": "Nvme$subsystem", 00:34:43.207 "trtype": "$TEST_TRANSPORT", 00:34:43.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.207 "adrfam": "ipv4", 00:34:43.207 "trsvcid": "$NVMF_PORT", 00:34:43.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.207 "hdgst": ${hdgst:-false}, 00:34:43.207 "ddgst": ${ddgst:-false} 00:34:43.207 }, 00:34:43.207 "method": "bdev_nvme_attach_controller" 00:34:43.207 } 00:34:43.207 EOF 00:34:43.207 )") 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:43.207 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:34:43.208 02:34:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:43.208 "params": { 00:34:43.208 "name": "Nvme0", 00:34:43.208 "trtype": "tcp", 00:34:43.208 "traddr": "10.0.0.2", 00:34:43.208 "adrfam": "ipv4", 00:34:43.208 "trsvcid": "4420", 00:34:43.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:43.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:43.208 "hdgst": true, 00:34:43.208 "ddgst": true 00:34:43.208 }, 00:34:43.208 "method": "bdev_nvme_attach_controller" 00:34:43.208 }' 00:34:43.465 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:43.465 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:43.465 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.466 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.466 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:43.466 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:43.466 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:43.466 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:43.466 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:43.466 02:34:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.466 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:43.466 ... 
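Relative to the earlier attach JSON, the only functional change here is hdgst and ddgst flipping to true, which enables NVMe/TCP header and data digests (CRC32C) on the initiator connection. Attaching the same controller by hand would look roughly as follows; the long-option spellings should be verified against your rpc.py version:

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 --hdgst --ddgst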
00:34:43.466 fio-3.35 00:34:43.466 Starting 3 threads 00:34:43.723 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.917 00:34:55.917 filename0: (groupid=0, jobs=1): err= 0: pid=1204005: Sat Jul 27 02:34:22 2024 00:34:55.917 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10046msec) 00:34:55.917 slat (nsec): min=6619, max=37052, avg=14052.59, stdev=3412.29 00:34:55.917 clat (usec): min=6041, max=57612, avg=15337.00, stdev=3772.88 00:34:55.917 lat (usec): min=6054, max=57626, avg=15351.06, stdev=3772.93 00:34:55.917 clat percentiles (usec): 00:34:55.917 | 1.00th=[ 7439], 5.00th=[10552], 10.00th=[11469], 20.00th=[13829], 00:34:55.917 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:34:55.917 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17695], 00:34:55.917 | 99.00th=[19530], 99.50th=[52167], 99.90th=[56886], 99.95th=[57410], 00:34:55.917 | 99.99th=[57410] 00:34:55.917 bw ( KiB/s): min=22528, max=26880, per=37.11%, avg=25065.05, stdev=1192.63, samples=20 00:34:55.917 iops : min= 176, max= 210, avg=195.80, stdev= 9.29, samples=20 00:34:55.917 lat (msec) : 10=3.01%, 20=96.33%, 50=0.15%, 100=0.51% 00:34:55.917 cpu : usr=90.34%, sys=8.90%, ctx=23, majf=0, minf=158 00:34:55.917 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.917 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.917 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:55.917 filename0: (groupid=0, jobs=1): err= 0: pid=1204006: Sat Jul 27 02:34:22 2024 00:34:55.917 read: IOPS=162, BW=20.4MiB/s (21.4MB/s)(205MiB/10048msec) 00:34:55.917 slat (nsec): min=6047, max=87663, avg=13943.54, stdev=4358.30 00:34:55.917 clat (usec): min=9799, max=95960, avg=18367.34, stdev=8862.19 00:34:55.917 lat (usec): min=9812, max=95973, avg=18381.29, stdev=8862.18 00:34:55.917 clat percentiles (usec): 00:34:55.917 | 1.00th=[11600], 5.00th=[13829], 10.00th=[14877], 20.00th=[15533], 00:34:55.917 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:34:55.917 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18482], 95.00th=[20579], 00:34:55.917 | 99.00th=[58459], 99.50th=[59507], 99.90th=[60556], 99.95th=[95945], 00:34:55.917 | 99.99th=[95945] 00:34:55.917 bw ( KiB/s): min=18176, max=23808, per=30.98%, avg=20928.00, stdev=1743.96, samples=20 00:34:55.917 iops : min= 142, max= 186, avg=163.50, stdev=13.62, samples=20 00:34:55.917 lat (msec) : 10=0.06%, 20=94.87%, 50=0.43%, 100=4.64% 00:34:55.917 cpu : usr=90.59%, sys=8.94%, ctx=23, majf=0, minf=174 00:34:55.917 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.917 issued rwts: total=1637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.917 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:55.917 filename0: (groupid=0, jobs=1): err= 0: pid=1204007: Sat Jul 27 02:34:22 2024 00:34:55.917 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(213MiB/10046msec) 00:34:55.917 slat (nsec): min=5969, max=46881, avg=14743.73, stdev=4803.55 00:34:55.917 clat (usec): min=6997, max=61327, avg=17631.69, stdev=5509.07 00:34:55.917 lat (usec): min=7011, max=61340, avg=17646.44, stdev=5509.20 00:34:55.917 clat percentiles (usec): 
00:34:55.917 | 1.00th=[10290], 5.00th=[11600], 10.00th=[12649], 20.00th=[15533], 00:34:55.917 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:34:55.917 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19792], 95.00th=[20579], 00:34:55.917 | 99.00th=[56361], 99.50th=[57410], 99.90th=[60556], 99.95th=[61080], 00:34:55.917 | 99.99th=[61080] 00:34:55.917 bw ( KiB/s): min=19968, max=23808, per=32.27%, avg=21798.40, stdev=1127.49, samples=20 00:34:55.917 iops : min= 156, max= 186, avg=170.30, stdev= 8.81, samples=20 00:34:55.917 lat (msec) : 10=0.59%, 20=90.15%, 50=7.80%, 100=1.47% 00:34:55.917 cpu : usr=90.79%, sys=8.23%, ctx=101, majf=0, minf=126 00:34:55.917 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.917 issued rwts: total=1705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.917 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:55.917 00:34:55.917 Run status group 0 (all jobs): 00:34:55.917 READ: bw=66.0MiB/s (69.2MB/s), 20.4MiB/s-24.4MiB/s (21.4MB/s-25.6MB/s), io=663MiB (695MB), run=10046-10048msec 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.917 00:34:55.917 real 0m11.068s 00:34:55.917 user 0m28.301s 00:34:55.917 sys 0m2.896s 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:55.917 02:34:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:55.917 ************************************ 00:34:55.917 END TEST fio_dif_digest 00:34:55.917 ************************************ 00:34:55.917 02:34:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:55.917 02:34:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:55.917 rmmod nvme_tcp 00:34:55.917 rmmod 
nvme_fabrics 00:34:55.917 rmmod nvme_keyring 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1197908 ']' 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1197908 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1197908 ']' 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1197908 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1197908 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1197908' 00:34:55.917 killing process with pid 1197908 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1197908 00:34:55.917 02:34:22 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1197908 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:55.917 02:34:22 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:55.917 Waiting for block devices as requested 00:34:55.917 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:55.917 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:55.917 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:56.176 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:56.176 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:56.176 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:56.176 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:56.449 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:56.449 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:56.449 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:56.449 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:56.735 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:56.735 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:56.735 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:56.735 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:56.993 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:56.993 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:56.993 02:34:25 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:56.993 02:34:25 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:56.993 02:34:25 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:56.993 02:34:25 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:56.993 02:34:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.993 02:34:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:56.993 02:34:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.527 02:34:27 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:59.527 00:34:59.527 real 1m6.642s 00:34:59.527 user 6m26.537s 00:34:59.527 sys 0m20.904s 00:34:59.527 02:34:27 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:59.527 02:34:27 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:34:59.527 ************************************ 00:34:59.527 END TEST nvmf_dif 00:34:59.527 ************************************ 00:34:59.527 02:34:27 -- spdk/autotest.sh@299 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:59.527 02:34:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:59.527 02:34:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:59.527 02:34:27 -- common/autotest_common.sh@10 -- # set +x 00:34:59.527 ************************************ 00:34:59.527 START TEST nvmf_abort_qd_sizes 00:34:59.527 ************************************ 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:59.527 * Looking for test storage... 00:34:59.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.527 02:34:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:34:59.527 02:34:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.429 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:01.430 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:01.430 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:01.430 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:01.430 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
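--- annotation: the scan above classifies NICs purely by PCI vendor:device ID (0x8086:0x159b is the Intel E810 port) and then resolves each matching function to its kernel netdev through sysfs. A minimal standalone sketch of the same lookup, using lspci from pciutils in place of SPDK's internal pci_bus_cache (device IDs and interface names are from this run; other NICs would need their IDs added):
# enumerate Intel E810 (8086:159b) functions and their bound netdevs
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    # each matching PCI function exposes its netdev(s) under sysfs
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
    done
done
---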
00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:01.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:35:01.430 00:35:01.430 --- 10.0.0.2 ping statistics --- 00:35:01.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.430 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:35:01.430 00:35:01.430 --- 10.0.0.1 ping statistics --- 00:35:01.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.430 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:01.430 02:34:29 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:02.365 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:02.365 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:02.365 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:02.365 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:02.365 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:02.365 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:02.365 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:02.365 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:02.365 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:02.365 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:02.365 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:02.365 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:02.623 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:02.623 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:02.623 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:02.623 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:03.561 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1208854 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1208854 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1208854 ']' 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
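--- annotation: condensing the nvmf_tcp_init trace above into a standalone sequence: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, while its link partner (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic really crosses the wire. Interface names and addresses are the ones used in this run:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
---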
00:35:03.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.561 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.561 [2024-07-27 02:34:31.649191] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:35:03.561 [2024-07-27 02:34:31.649277] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.561 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.561 [2024-07-27 02:34:31.690362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:03.561 [2024-07-27 02:34:31.721822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:03.824 [2024-07-27 02:34:31.815319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.824 [2024-07-27 02:34:31.815389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.824 [2024-07-27 02:34:31.815414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.824 [2024-07-27 02:34:31.815428] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.824 [2024-07-27 02:34:31.815440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:03.824 [2024-07-27 02:34:31.818086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.824 [2024-07-27 02:34:31.818129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:03.824 [2024-07-27 02:34:31.818221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.824 [2024-07-27 02:34:31.818224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:03.824 02:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.824 ************************************ 00:35:03.824 START TEST spdk_target_abort 00:35:03.824 ************************************ 00:35:03.824 02:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:03.824 02:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:03.824 02:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:03.824 02:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.824 02:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.100 spdk_targetn1 00:35:07.100 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.100 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:07.100 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.100 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.100 [2024-07-27 02:34:34.819804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.100 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.100 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:07.100 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.101 [2024-07-27 02:34:34.852086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:07.101 02:34:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:07.101 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.379 Initializing NVMe Controllers 00:35:10.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:10.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:10.379 Initialization complete. Launching workers. 00:35:10.379 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9712, failed: 0 00:35:10.379 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1177, failed to submit 8535 00:35:10.379 success 814, unsuccess 363, failed 0 00:35:10.379 02:34:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:10.379 02:34:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:10.379 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.660 Initializing NVMe Controllers 00:35:13.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:13.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:13.660 Initialization complete. Launching workers. 00:35:13.660 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8522, failed: 0 00:35:13.660 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7268 00:35:13.660 success 323, unsuccess 931, failed 0 00:35:13.660 02:34:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:13.660 02:34:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:13.660 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.938 Initializing NVMe Controllers 00:35:16.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:16.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:16.938 Initialization complete. Launching workers. 
00:35:16.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31179, failed: 0 00:35:16.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2664, failed to submit 28515 00:35:16.938 success 520, unsuccess 2144, failed 0 00:35:16.938 02:34:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:16.938 02:34:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.938 02:34:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:16.938 02:34:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.938 02:34:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:16.938 02:34:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.938 02:34:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:17.870 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.870 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1208854 00:35:17.870 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1208854 ']' 00:35:17.870 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1208854 00:35:17.870 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:17.871 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:17.871 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1208854 00:35:18.129 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:18.129 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:18.129 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1208854' 00:35:18.129 killing process with pid 1208854 00:35:18.129 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1208854 00:35:18.129 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1208854 00:35:18.129 00:35:18.129 real 0m14.286s 00:35:18.129 user 0m53.658s 00:35:18.129 sys 0m2.827s 00:35:18.129 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:18.129 02:34:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:18.129 ************************************ 00:35:18.129 END TEST spdk_target_abort 00:35:18.129 ************************************ 00:35:18.129 02:34:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:18.129 02:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:18.129 02:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:18.129 02:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:18.387 ************************************ 00:35:18.387 START TEST kernel_target_abort 00:35:18.387 
************************************ 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:18.387 02:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:19.321 Waiting for block devices as requested 00:35:19.321 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:19.584 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:19.584 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:19.862 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:19.862 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:19.862 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:19.862 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:19.862 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.121 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:20.121 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:20.121 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:20.121 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:20.379 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:20.379 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:20.379 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:20.379 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.637 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:20.637 No valid GPT data, bailing 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:20.637 02:34:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:20.637 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:20.895 00:35:20.895 Discovery Log Number of Records 2, Generation counter 2 00:35:20.895 =====Discovery Log Entry 0====== 00:35:20.895 trtype: tcp 00:35:20.895 adrfam: ipv4 00:35:20.895 subtype: current discovery subsystem 00:35:20.895 treq: not specified, sq flow control disable supported 00:35:20.895 portid: 1 00:35:20.895 trsvcid: 4420 00:35:20.895 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:20.895 traddr: 10.0.0.1 00:35:20.895 eflags: none 00:35:20.895 sectype: none 00:35:20.895 =====Discovery Log Entry 1====== 00:35:20.895 trtype: tcp 00:35:20.895 adrfam: ipv4 00:35:20.895 subtype: nvme subsystem 00:35:20.895 treq: not specified, sq flow control disable supported 00:35:20.895 portid: 1 00:35:20.895 trsvcid: 4420 00:35:20.895 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:20.895 traddr: 10.0.0.1 00:35:20.895 eflags: none 00:35:20.895 sectype: none 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.895 02:34:48 
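--- annotation: the configure_kernel_target sequence above drives the in-kernel nvmet target entirely through configfs: mkdir creates the subsystem, namespace and port objects, echo fills in their attributes, and the final ln -s exposes the subsystem on the port. Reconstructed as a standalone sketch using the standard nvmet configfs layout; xtrace hides the redirection targets above, so the exact attribute files are inferred:
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                                  # nvmet_tcp is demand-loaded when the port is enabled
mkdir -p "$sub/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # identity string; target file inferred
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                # enabling the port makes it discoverable
---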
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:20.895 02:34:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.895 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.172 Initializing NVMe Controllers 00:35:24.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:24.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:24.172 Initialization complete. Launching workers. 00:35:24.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28586, failed: 0 00:35:24.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28586, failed to submit 0 00:35:24.172 success 0, unsuccess 28586, failed 0 00:35:24.172 02:34:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:24.172 02:34:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.172 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.450 Initializing NVMe Controllers 00:35:27.450 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:27.450 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:27.450 Initialization complete. Launching workers. 
00:35:27.450 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58684, failed: 0 00:35:27.450 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14782, failed to submit 43902 00:35:27.450 success 0, unsuccess 14782, failed 0 00:35:27.450 02:34:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.450 02:34:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.450 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.729 Initializing NVMe Controllers 00:35:30.729 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:30.729 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:30.729 Initialization complete. Launching workers. 00:35:30.729 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56520, failed: 0 00:35:30.729 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14106, failed to submit 42414 00:35:30.729 success 0, unsuccess 14106, failed 0 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:30.729 02:34:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:31.295 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:31.295 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:31.295 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:31.295 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:31.295 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:31.295 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:31.295 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:31.295 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:31.295 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:31.295 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:31.295 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:31.552 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:31.552 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:31.552 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:35:31.552 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:31.552 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:32.488 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:32.488 00:35:32.488 real 0m14.234s 00:35:32.488 user 0m4.677s 00:35:32.488 sys 0m3.407s 00:35:32.488 02:35:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:32.488 02:35:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.488 ************************************ 00:35:32.488 END TEST kernel_target_abort 00:35:32.488 ************************************ 00:35:32.488 02:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:32.488 02:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:32.489 rmmod nvme_tcp 00:35:32.489 rmmod nvme_fabrics 00:35:32.489 rmmod nvme_keyring 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1208854 ']' 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1208854 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1208854 ']' 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1208854 00:35:32.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1208854) - No such process 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1208854 is not found' 00:35:32.489 Process with pid 1208854 is not found 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:32.489 02:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:33.424 Waiting for block devices as requested 00:35:33.683 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:33.683 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:33.941 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:33.941 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:33.941 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:33.941 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:34.201 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:34.201 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:34.201 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:34.201 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:34.460 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:34.460 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:34.460 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:34.460 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:34.719 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:34.719 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:35:34.719 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:34.978 02:35:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:34.978 02:35:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:34.978 02:35:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:34.978 02:35:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:34.978 02:35:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.978 02:35:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:34.978 02:35:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.877 02:35:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:36.877 00:35:36.877 real 0m37.756s 00:35:36.877 user 1m0.408s 00:35:36.877 sys 0m9.514s 00:35:36.877 02:35:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:36.877 02:35:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:36.877 ************************************ 00:35:36.877 END TEST nvmf_abort_qd_sizes 00:35:36.877 ************************************ 00:35:36.877 02:35:04 -- spdk/autotest.sh@301 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:36.877 02:35:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:36.877 02:35:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:36.877 02:35:04 -- common/autotest_common.sh@10 -- # set +x 00:35:36.877 ************************************ 00:35:36.877 START TEST keyring_file 00:35:36.877 ************************************ 00:35:36.877 02:35:05 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:37.136 * Looking for test storage... 
00:35:37.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:37.136 02:35:05 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:37.136 02:35:05 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.136 02:35:05 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.136 02:35:05 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.136 02:35:05 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.136 02:35:05 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.136 02:35:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.136 02:35:05 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.136 02:35:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:37.136 02:35:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:37.136 02:35:05 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:37.136 02:35:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:37.136 02:35:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:37.136 02:35:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:37.136 02:35:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MoNptaynWa 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:37.137 02:35:05 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MoNptaynWa 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MoNptaynWa 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.MoNptaynWa 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DKwezpmQrT 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:37.137 02:35:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DKwezpmQrT 00:35:37.137 02:35:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DKwezpmQrT 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DKwezpmQrT 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=1214601 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:37.137 02:35:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1214601 00:35:37.137 02:35:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1214601 ']' 00:35:37.137 02:35:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.137 02:35:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:37.137 02:35:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.137 02:35:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:37.137 02:35:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:37.137 [2024-07-27 02:35:05.215311] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:35:37.137 [2024-07-27 02:35:05.215404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214601 ] 00:35:37.137 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.137 [2024-07-27 02:35:05.246741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
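
The prep_key sequence traced above is the whole key-file recipe: format_interchange_psk turns the raw hex string into an NVMe/TCP PSK interchange string through an inline "python -" snippet, mktemp picks the key path, and chmod 0600 locks it down before keyring_file_add_key ever sees it. A minimal stand-alone sketch of that derivation, assuming the interchange layout is prefix:digest:base64(key bytes + little-endian CRC-32): — the helper name here is illustrative, not part of the test suite:

  format_key_sketch() {
    # Sketch of the inline "python -" step above: wrap a raw key as an
    # NVMe TLS PSK interchange string (layout assumed, see note above).
    local prefix=$1 key=$2 digest=$3
    python3 -c 'import base64,sys,zlib; key=sys.argv[2].encode(); crc=zlib.crc32(key).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1],int(sys.argv[3]),base64.b64encode(key+crc).decode()))' "$prefix" "$key" "$digest"
  }
  # Usage mirroring the trace: format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 0

The chmod 0600 step is not cosmetic: later in this run, keyring_file_add_key is shown rejecting the very same file once its mode has been widened to 0660.
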
00:35:37.137 [2024-07-27 02:35:05.276949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.396 [2024-07-27 02:35:05.371281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:37.655 02:35:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:37.655 [2024-07-27 02:35:05.626947] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.655 null0 00:35:37.655 [2024-07-27 02:35:05.659015] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:37.655 [2024-07-27 02:35:05.659541] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:37.655 [2024-07-27 02:35:05.667013] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.655 02:35:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:37.655 [2024-07-27 02:35:05.679036] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:37.655 request: 00:35:37.655 { 00:35:37.655 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.655 "secure_channel": false, 00:35:37.655 "listen_address": { 00:35:37.655 "trtype": "tcp", 00:35:37.655 "traddr": "127.0.0.1", 00:35:37.655 "trsvcid": "4420" 00:35:37.655 }, 00:35:37.655 "method": "nvmf_subsystem_add_listener", 00:35:37.655 "req_id": 1 00:35:37.655 } 00:35:37.655 Got JSON-RPC error response 00:35:37.655 response: 00:35:37.655 { 00:35:37.655 "code": -32602, 00:35:37.655 "message": "Invalid parameters" 00:35:37.655 } 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:37.655 02:35:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=1214620 00:35:37.655 02:35:05 
keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:37.655 02:35:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1214620 /var/tmp/bperf.sock 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1214620 ']' 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:37.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:37.655 02:35:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:37.655 [2024-07-27 02:35:05.726995] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:35:37.655 [2024-07-27 02:35:05.727080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214620 ] 00:35:37.655 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.655 [2024-07-27 02:35:05.758225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:37.655 [2024-07-27 02:35:05.789820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.913 [2024-07-27 02:35:05.881171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.913 02:35:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:37.913 02:35:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:37.913 02:35:05 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:37.913 02:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:38.172 02:35:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DKwezpmQrT 00:35:38.172 02:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DKwezpmQrT 00:35:38.430 02:35:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:38.430 02:35:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:38.430 02:35:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.430 02:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.430 02:35:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.689 02:35:06 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.MoNptaynWa == \/\t\m\p\/\t\m\p\.\M\o\N\p\t\a\y\n\W\a ]] 00:35:38.689 02:35:06 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:38.689 02:35:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:38.689 02:35:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:35:38.689 02:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.689 02:35:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:38.947 02:35:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DKwezpmQrT == \/\t\m\p\/\t\m\p\.\D\K\w\e\z\p\m\Q\r\T ]] 00:35:38.947 02:35:06 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:38.947 02:35:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:38.947 02:35:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.947 02:35:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.947 02:35:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.947 02:35:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.204 02:35:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:39.204 02:35:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:39.204 02:35:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:39.204 02:35:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.204 02:35:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.205 02:35:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.205 02:35:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:39.463 02:35:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:39.463 02:35:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.463 02:35:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.721 [2024-07-27 02:35:07.720669] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:39.721 nvme0n1 00:35:39.721 02:35:07 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:39.721 02:35:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:39.721 02:35:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.721 02:35:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.721 02:35:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.721 02:35:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.988 02:35:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:39.988 02:35:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:35:39.988 02:35:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:39.988 02:35:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.988 02:35:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.988 02:35:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:35:39.988 02:35:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:40.310 02:35:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:40.310 02:35:08 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:40.310 Running I/O for 1 seconds... 00:35:41.684 00:35:41.684 Latency(us) 00:35:41.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.684 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:41.684 nvme0n1 : 1.02 4504.71 17.60 0.00 0.00 28159.71 6262.33 36700.16 00:35:41.684 =================================================================================================================== 00:35:41.684 Total : 4504.71 17.60 0.00 0.00 28159.71 6262.33 36700.16 00:35:41.684 0 00:35:41.684 02:35:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:41.684 02:35:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:41.684 02:35:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:35:41.684 02:35:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:41.684 02:35:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.684 02:35:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.684 02:35:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.684 02:35:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.941 02:35:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:41.941 02:35:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:35:41.941 02:35:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:41.941 02:35:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.941 02:35:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.941 02:35:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.941 02:35:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:42.196 02:35:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:42.196 02:35:10 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:42.196 02:35:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:42.197 02:35:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:42.197 02:35:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:42.197 02:35:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.197 02:35:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:42.197 02:35:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.197 02:35:10 keyring_file -- 
common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:42.197 02:35:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:42.452 [2024-07-27 02:35:10.467277] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:42.452 [2024-07-27 02:35:10.467738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10567b0 (107): Transport endpoint is not connected 00:35:42.452 [2024-07-27 02:35:10.468726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10567b0 (9): Bad file descriptor 00:35:42.452 [2024-07-27 02:35:10.469723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:42.452 [2024-07-27 02:35:10.469746] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:42.452 [2024-07-27 02:35:10.469761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:42.452 request: 00:35:42.452 { 00:35:42.452 "name": "nvme0", 00:35:42.452 "trtype": "tcp", 00:35:42.452 "traddr": "127.0.0.1", 00:35:42.452 "adrfam": "ipv4", 00:35:42.452 "trsvcid": "4420", 00:35:42.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.452 "prchk_reftag": false, 00:35:42.452 "prchk_guard": false, 00:35:42.452 "hdgst": false, 00:35:42.452 "ddgst": false, 00:35:42.452 "psk": "key1", 00:35:42.452 "method": "bdev_nvme_attach_controller", 00:35:42.452 "req_id": 1 00:35:42.452 } 00:35:42.452 Got JSON-RPC error response 00:35:42.452 response: 00:35:42.452 { 00:35:42.453 "code": -5, 00:35:42.453 "message": "Input/output error" 00:35:42.453 } 00:35:42.453 02:35:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:42.453 02:35:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:42.453 02:35:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:42.453 02:35:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:42.453 02:35:10 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:35:42.453 02:35:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.453 02:35:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.453 02:35:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.453 02:35:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.453 02:35:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.709 02:35:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:42.709 02:35:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:35:42.709 02:35:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:42.709 02:35:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.709 02:35:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.709 02:35:10 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.709 02:35:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:42.966 02:35:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:42.966 02:35:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:42.966 02:35:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:43.224 02:35:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:43.224 02:35:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:43.480 02:35:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:43.480 02:35:11 keyring_file -- keyring/file.sh@77 -- # jq length 00:35:43.480 02:35:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.737 02:35:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:43.737 02:35:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.MoNptaynWa 00:35:43.737 02:35:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:43.737 02:35:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:43.737 02:35:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:43.737 02:35:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:43.737 02:35:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.737 02:35:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:43.737 02:35:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.737 02:35:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:43.738 02:35:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:43.995 [2024-07-27 02:35:11.967587] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MoNptaynWa': 0100660 00:35:43.995 [2024-07-27 02:35:11.967624] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:43.995 request: 00:35:43.995 { 00:35:43.995 "name": "key0", 00:35:43.995 "path": "/tmp/tmp.MoNptaynWa", 00:35:43.995 "method": "keyring_file_add_key", 00:35:43.995 "req_id": 1 00:35:43.995 } 00:35:43.995 Got JSON-RPC error response 00:35:43.995 response: 00:35:43.995 { 00:35:43.995 "code": -1, 00:35:43.995 "message": "Operation not permitted" 00:35:43.995 } 00:35:43.995 02:35:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:43.995 02:35:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:43.995 02:35:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:43.995 02:35:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:43.995 02:35:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.MoNptaynWa 00:35:43.995 02:35:11 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:43.995 02:35:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MoNptaynWa 00:35:44.252 02:35:12 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.MoNptaynWa 00:35:44.252 02:35:12 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:35:44.252 02:35:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.252 02:35:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.252 02:35:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.252 02:35:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.252 02:35:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.510 02:35:12 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:44.510 02:35:12 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:44.510 02:35:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:44.510 02:35:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:44.510 02:35:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:44.510 02:35:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:44.510 02:35:12 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:44.510 02:35:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:44.510 02:35:12 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:44.510 02:35:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:44.767 [2024-07-27 02:35:12.717660] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.MoNptaynWa': No such file or directory 00:35:44.767 [2024-07-27 02:35:12.717698] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:44.767 [2024-07-27 02:35:12.717739] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:44.767 [2024-07-27 02:35:12.717759] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:44.767 [2024-07-27 02:35:12.717771] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:44.767 request: 00:35:44.767 { 00:35:44.767 "name": "nvme0", 00:35:44.767 "trtype": "tcp", 00:35:44.767 "traddr": "127.0.0.1", 00:35:44.767 "adrfam": "ipv4", 00:35:44.767 "trsvcid": "4420", 00:35:44.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:44.767 "prchk_reftag": false, 00:35:44.768 
"prchk_guard": false, 00:35:44.768 "hdgst": false, 00:35:44.768 "ddgst": false, 00:35:44.768 "psk": "key0", 00:35:44.768 "method": "bdev_nvme_attach_controller", 00:35:44.768 "req_id": 1 00:35:44.768 } 00:35:44.768 Got JSON-RPC error response 00:35:44.768 response: 00:35:44.768 { 00:35:44.768 "code": -19, 00:35:44.768 "message": "No such device" 00:35:44.768 } 00:35:44.768 02:35:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:44.768 02:35:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:44.768 02:35:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:44.768 02:35:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:44.768 02:35:12 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:44.768 02:35:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:45.025 02:35:12 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:45.025 02:35:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:45.025 02:35:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:45.025 02:35:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:45.025 02:35:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:45.025 02:35:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:45.025 02:35:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ywr1rD33yb 00:35:45.025 02:35:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:45.025 02:35:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:45.025 02:35:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:45.025 02:35:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:45.025 02:35:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:45.025 02:35:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:45.025 02:35:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:45.025 02:35:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ywr1rD33yb 00:35:45.025 02:35:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ywr1rD33yb 00:35:45.025 02:35:13 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ywr1rD33yb 00:35:45.025 02:35:13 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ywr1rD33yb 00:35:45.025 02:35:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ywr1rD33yb 00:35:45.283 02:35:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.283 02:35:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.541 nvme0n1 00:35:45.541 02:35:13 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:35:45.541 02:35:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.541 02:35:13 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.541 02:35:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.541 02:35:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.541 02:35:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.798 02:35:13 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:45.798 02:35:13 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:45.798 02:35:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:46.055 02:35:14 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:35:46.055 02:35:14 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:35:46.055 02:35:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.055 02:35:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.055 02:35:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.312 02:35:14 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:46.312 02:35:14 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:35:46.312 02:35:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:46.312 02:35:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.312 02:35:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.312 02:35:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.312 02:35:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.570 02:35:14 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:46.570 02:35:14 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:46.570 02:35:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:46.827 02:35:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:46.827 02:35:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.827 02:35:14 keyring_file -- keyring/file.sh@104 -- # jq length 00:35:47.085 02:35:15 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:47.085 02:35:15 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ywr1rD33yb 00:35:47.085 02:35:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ywr1rD33yb 00:35:47.342 02:35:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DKwezpmQrT 00:35:47.342 02:35:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DKwezpmQrT 00:35:47.600 02:35:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.600 02:35:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.857 nvme0n1 00:35:47.857 02:35:15 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:47.857 02:35:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:48.115 02:35:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:35:48.115 "subsystems": [ 00:35:48.115 { 00:35:48.115 "subsystem": "keyring", 00:35:48.115 "config": [ 00:35:48.115 { 00:35:48.115 "method": "keyring_file_add_key", 00:35:48.115 "params": { 00:35:48.115 "name": "key0", 00:35:48.115 "path": "/tmp/tmp.ywr1rD33yb" 00:35:48.115 } 00:35:48.115 }, 00:35:48.115 { 00:35:48.115 "method": "keyring_file_add_key", 00:35:48.115 "params": { 00:35:48.115 "name": "key1", 00:35:48.115 "path": "/tmp/tmp.DKwezpmQrT" 00:35:48.115 } 00:35:48.115 } 00:35:48.115 ] 00:35:48.115 }, 00:35:48.115 { 00:35:48.115 "subsystem": "iobuf", 00:35:48.115 "config": [ 00:35:48.115 { 00:35:48.115 "method": "iobuf_set_options", 00:35:48.115 "params": { 00:35:48.115 "small_pool_count": 8192, 00:35:48.115 "large_pool_count": 1024, 00:35:48.115 "small_bufsize": 8192, 00:35:48.115 "large_bufsize": 135168 00:35:48.115 } 00:35:48.115 } 00:35:48.115 ] 00:35:48.115 }, 00:35:48.115 { 00:35:48.115 "subsystem": "sock", 00:35:48.115 "config": [ 00:35:48.115 { 00:35:48.115 "method": "sock_set_default_impl", 00:35:48.115 "params": { 00:35:48.115 "impl_name": "posix" 00:35:48.115 } 00:35:48.115 }, 00:35:48.115 { 00:35:48.115 "method": "sock_impl_set_options", 00:35:48.115 "params": { 00:35:48.115 "impl_name": "ssl", 00:35:48.115 "recv_buf_size": 4096, 00:35:48.115 "send_buf_size": 4096, 00:35:48.115 "enable_recv_pipe": true, 00:35:48.115 "enable_quickack": false, 00:35:48.115 "enable_placement_id": 0, 00:35:48.115 "enable_zerocopy_send_server": true, 00:35:48.115 "enable_zerocopy_send_client": false, 00:35:48.115 "zerocopy_threshold": 0, 00:35:48.115 "tls_version": 0, 00:35:48.115 "enable_ktls": false 00:35:48.115 } 00:35:48.115 }, 00:35:48.115 { 00:35:48.115 "method": "sock_impl_set_options", 00:35:48.115 "params": { 00:35:48.115 "impl_name": "posix", 00:35:48.115 "recv_buf_size": 2097152, 00:35:48.115 "send_buf_size": 2097152, 00:35:48.115 "enable_recv_pipe": true, 00:35:48.115 "enable_quickack": false, 00:35:48.115 "enable_placement_id": 0, 00:35:48.116 "enable_zerocopy_send_server": true, 00:35:48.116 "enable_zerocopy_send_client": false, 00:35:48.116 "zerocopy_threshold": 0, 00:35:48.116 "tls_version": 0, 00:35:48.116 "enable_ktls": false 00:35:48.116 } 00:35:48.116 } 00:35:48.116 ] 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "subsystem": "vmd", 00:35:48.116 "config": [] 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "subsystem": "accel", 00:35:48.116 "config": [ 00:35:48.116 { 00:35:48.116 "method": "accel_set_options", 00:35:48.116 "params": { 00:35:48.116 "small_cache_size": 128, 00:35:48.116 "large_cache_size": 16, 00:35:48.116 "task_count": 2048, 00:35:48.116 "sequence_count": 2048, 00:35:48.116 "buf_count": 2048 00:35:48.116 } 00:35:48.116 } 00:35:48.116 ] 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "subsystem": "bdev", 00:35:48.116 "config": [ 00:35:48.116 { 00:35:48.116 "method": "bdev_set_options", 00:35:48.116 
"params": { 00:35:48.116 "bdev_io_pool_size": 65535, 00:35:48.116 "bdev_io_cache_size": 256, 00:35:48.116 "bdev_auto_examine": true, 00:35:48.116 "iobuf_small_cache_size": 128, 00:35:48.116 "iobuf_large_cache_size": 16 00:35:48.116 } 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "method": "bdev_raid_set_options", 00:35:48.116 "params": { 00:35:48.116 "process_window_size_kb": 1024, 00:35:48.116 "process_max_bandwidth_mb_sec": 0 00:35:48.116 } 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "method": "bdev_iscsi_set_options", 00:35:48.116 "params": { 00:35:48.116 "timeout_sec": 30 00:35:48.116 } 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "method": "bdev_nvme_set_options", 00:35:48.116 "params": { 00:35:48.116 "action_on_timeout": "none", 00:35:48.116 "timeout_us": 0, 00:35:48.116 "timeout_admin_us": 0, 00:35:48.116 "keep_alive_timeout_ms": 10000, 00:35:48.116 "arbitration_burst": 0, 00:35:48.116 "low_priority_weight": 0, 00:35:48.116 "medium_priority_weight": 0, 00:35:48.116 "high_priority_weight": 0, 00:35:48.116 "nvme_adminq_poll_period_us": 10000, 00:35:48.116 "nvme_ioq_poll_period_us": 0, 00:35:48.116 "io_queue_requests": 512, 00:35:48.116 "delay_cmd_submit": true, 00:35:48.116 "transport_retry_count": 4, 00:35:48.116 "bdev_retry_count": 3, 00:35:48.116 "transport_ack_timeout": 0, 00:35:48.116 "ctrlr_loss_timeout_sec": 0, 00:35:48.116 "reconnect_delay_sec": 0, 00:35:48.116 "fast_io_fail_timeout_sec": 0, 00:35:48.116 "disable_auto_failback": false, 00:35:48.116 "generate_uuids": false, 00:35:48.116 "transport_tos": 0, 00:35:48.116 "nvme_error_stat": false, 00:35:48.116 "rdma_srq_size": 0, 00:35:48.116 "io_path_stat": false, 00:35:48.116 "allow_accel_sequence": false, 00:35:48.116 "rdma_max_cq_size": 0, 00:35:48.116 "rdma_cm_event_timeout_ms": 0, 00:35:48.116 "dhchap_digests": [ 00:35:48.116 "sha256", 00:35:48.116 "sha384", 00:35:48.116 "sha512" 00:35:48.116 ], 00:35:48.116 "dhchap_dhgroups": [ 00:35:48.116 "null", 00:35:48.116 "ffdhe2048", 00:35:48.116 "ffdhe3072", 00:35:48.116 "ffdhe4096", 00:35:48.116 "ffdhe6144", 00:35:48.116 "ffdhe8192" 00:35:48.116 ] 00:35:48.116 } 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "method": "bdev_nvme_attach_controller", 00:35:48.116 "params": { 00:35:48.116 "name": "nvme0", 00:35:48.116 "trtype": "TCP", 00:35:48.116 "adrfam": "IPv4", 00:35:48.116 "traddr": "127.0.0.1", 00:35:48.116 "trsvcid": "4420", 00:35:48.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.116 "prchk_reftag": false, 00:35:48.116 "prchk_guard": false, 00:35:48.116 "ctrlr_loss_timeout_sec": 0, 00:35:48.116 "reconnect_delay_sec": 0, 00:35:48.116 "fast_io_fail_timeout_sec": 0, 00:35:48.116 "psk": "key0", 00:35:48.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.116 "hdgst": false, 00:35:48.116 "ddgst": false 00:35:48.116 } 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "method": "bdev_nvme_set_hotplug", 00:35:48.116 "params": { 00:35:48.116 "period_us": 100000, 00:35:48.116 "enable": false 00:35:48.116 } 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "method": "bdev_wait_for_examine" 00:35:48.116 } 00:35:48.116 ] 00:35:48.116 }, 00:35:48.116 { 00:35:48.116 "subsystem": "nbd", 00:35:48.116 "config": [] 00:35:48.116 } 00:35:48.116 ] 00:35:48.116 }' 00:35:48.116 02:35:16 keyring_file -- keyring/file.sh@114 -- # killprocess 1214620 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1214620 ']' 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1214620 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@955 -- # uname 
00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1214620 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1214620' 00:35:48.116 killing process with pid 1214620 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@969 -- # kill 1214620 00:35:48.116 Received shutdown signal, test time was about 1.000000 seconds 00:35:48.116 00:35:48.116 Latency(us) 00:35:48.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.116 =================================================================================================================== 00:35:48.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:48.116 02:35:16 keyring_file -- common/autotest_common.sh@974 -- # wait 1214620 00:35:48.375 02:35:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=1216048 00:35:48.375 02:35:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1216048 /var/tmp/bperf.sock 00:35:48.375 02:35:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1216048 ']' 00:35:48.375 02:35:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:48.375 02:35:16 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:48.375 02:35:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:48.375 02:35:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:48.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
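
The "-c /dev/fd/63" in the relaunch command above is the expanded form of a bash process substitution: the echoed JSON (the same configuration captured a moment earlier, reproduced just below) is fed to the new bdevperf without ever touching disk. The pattern, sketched under that reading of the trace:

  # Sketch of the relaunch pattern: replay a saved JSON config into a
  # fresh bdevperf through process substitution. bash expands <(...) to
  # a /dev/fd path, which is what the trace records as /dev/fd/63.
  config='{ "subsystems": [ ... ] }'   # the JSON echoed below, abbreviated here
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")
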
00:35:48.375 02:35:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:35:48.375 "subsystems": [ 00:35:48.375 { 00:35:48.375 "subsystem": "keyring", 00:35:48.375 "config": [ 00:35:48.375 { 00:35:48.375 "method": "keyring_file_add_key", 00:35:48.375 "params": { 00:35:48.375 "name": "key0", 00:35:48.375 "path": "/tmp/tmp.ywr1rD33yb" 00:35:48.375 } 00:35:48.375 }, 00:35:48.375 { 00:35:48.375 "method": "keyring_file_add_key", 00:35:48.375 "params": { 00:35:48.375 "name": "key1", 00:35:48.375 "path": "/tmp/tmp.DKwezpmQrT" 00:35:48.375 } 00:35:48.375 } 00:35:48.375 ] 00:35:48.375 }, 00:35:48.375 { 00:35:48.375 "subsystem": "iobuf", 00:35:48.375 "config": [ 00:35:48.375 { 00:35:48.375 "method": "iobuf_set_options", 00:35:48.375 "params": { 00:35:48.375 "small_pool_count": 8192, 00:35:48.375 "large_pool_count": 1024, 00:35:48.375 "small_bufsize": 8192, 00:35:48.375 "large_bufsize": 135168 00:35:48.375 } 00:35:48.375 } 00:35:48.375 ] 00:35:48.375 }, 00:35:48.375 { 00:35:48.375 "subsystem": "sock", 00:35:48.375 "config": [ 00:35:48.375 { 00:35:48.375 "method": "sock_set_default_impl", 00:35:48.375 "params": { 00:35:48.375 "impl_name": "posix" 00:35:48.375 } 00:35:48.375 }, 00:35:48.375 { 00:35:48.376 "method": "sock_impl_set_options", 00:35:48.376 "params": { 00:35:48.376 "impl_name": "ssl", 00:35:48.376 "recv_buf_size": 4096, 00:35:48.376 "send_buf_size": 4096, 00:35:48.376 "enable_recv_pipe": true, 00:35:48.376 "enable_quickack": false, 00:35:48.376 "enable_placement_id": 0, 00:35:48.376 "enable_zerocopy_send_server": true, 00:35:48.376 "enable_zerocopy_send_client": false, 00:35:48.376 "zerocopy_threshold": 0, 00:35:48.376 "tls_version": 0, 00:35:48.376 "enable_ktls": false 00:35:48.376 } 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "method": "sock_impl_set_options", 00:35:48.376 "params": { 00:35:48.376 "impl_name": "posix", 00:35:48.376 "recv_buf_size": 2097152, 00:35:48.376 "send_buf_size": 2097152, 00:35:48.376 "enable_recv_pipe": true, 00:35:48.376 "enable_quickack": false, 00:35:48.376 "enable_placement_id": 0, 00:35:48.376 "enable_zerocopy_send_server": true, 00:35:48.376 "enable_zerocopy_send_client": false, 00:35:48.376 "zerocopy_threshold": 0, 00:35:48.376 "tls_version": 0, 00:35:48.376 "enable_ktls": false 00:35:48.376 } 00:35:48.376 } 00:35:48.376 ] 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "subsystem": "vmd", 00:35:48.376 "config": [] 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "subsystem": "accel", 00:35:48.376 "config": [ 00:35:48.376 { 00:35:48.376 "method": "accel_set_options", 00:35:48.376 "params": { 00:35:48.376 "small_cache_size": 128, 00:35:48.376 "large_cache_size": 16, 00:35:48.376 "task_count": 2048, 00:35:48.376 "sequence_count": 2048, 00:35:48.376 "buf_count": 2048 00:35:48.376 } 00:35:48.376 } 00:35:48.376 ] 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "subsystem": "bdev", 00:35:48.376 "config": [ 00:35:48.376 { 00:35:48.376 "method": "bdev_set_options", 00:35:48.376 "params": { 00:35:48.376 "bdev_io_pool_size": 65535, 00:35:48.376 "bdev_io_cache_size": 256, 00:35:48.376 "bdev_auto_examine": true, 00:35:48.376 "iobuf_small_cache_size": 128, 00:35:48.376 "iobuf_large_cache_size": 16 00:35:48.376 } 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "method": "bdev_raid_set_options", 00:35:48.376 "params": { 00:35:48.376 "process_window_size_kb": 1024, 00:35:48.376 "process_max_bandwidth_mb_sec": 0 00:35:48.376 } 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "method": "bdev_iscsi_set_options", 00:35:48.376 "params": { 00:35:48.376 "timeout_sec": 30 00:35:48.376 } 00:35:48.376 
}, 00:35:48.376 { 00:35:48.376 "method": "bdev_nvme_set_options", 00:35:48.376 "params": { 00:35:48.376 "action_on_timeout": "none", 00:35:48.376 "timeout_us": 0, 00:35:48.376 "timeout_admin_us": 0, 00:35:48.376 "keep_alive_timeout_ms": 10000, 00:35:48.376 "arbitration_burst": 0, 00:35:48.376 "low_priority_weight": 0, 00:35:48.376 "medium_priority_weight": 0, 00:35:48.376 "high_priority_weight": 0, 00:35:48.376 "nvme_adminq_poll_period_us": 10000, 00:35:48.376 "nvme_ioq_poll_period_us": 0, 00:35:48.376 "io_queue_requests": 512, 00:35:48.376 "delay_cmd_submit": true, 00:35:48.376 "transport_retry_count": 4, 00:35:48.376 "bdev_retry_count": 3, 00:35:48.376 "transport_ack_timeout": 0, 00:35:48.376 "ctrlr_loss_timeout_sec": 0, 00:35:48.376 "reconnect_delay_sec": 0, 00:35:48.376 "fast_io_fail_timeout_sec": 0, 00:35:48.376 "disable_auto_failback": false, 00:35:48.376 "generate_uuids": false, 00:35:48.376 "transport_tos": 0, 00:35:48.376 "nvme_error_stat": false, 00:35:48.376 "rdma_srq_size": 0, 00:35:48.376 "io_path_stat": false, 00:35:48.376 "allow_accel_sequence": false, 00:35:48.376 "rdma_max_cq_size": 0, 00:35:48.376 "rdma_cm_event_timeout_ms": 0, 00:35:48.376 "dhchap_digests": [ 00:35:48.376 "sha256", 00:35:48.376 "sha384", 00:35:48.376 "sha512" 00:35:48.376 ], 00:35:48.376 "dhchap_dhgroups": [ 00:35:48.376 "null", 00:35:48.376 "ffdhe2048", 00:35:48.376 "ffdhe3072", 00:35:48.376 "ffdhe4096", 00:35:48.376 "ffdhe6144", 00:35:48.376 "ffdhe8192" 00:35:48.376 ] 00:35:48.376 } 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "method": "bdev_nvme_attach_controller", 00:35:48.376 "params": { 00:35:48.376 "name": "nvme0", 00:35:48.376 "trtype": "TCP", 00:35:48.376 "adrfam": "IPv4", 00:35:48.376 "traddr": "127.0.0.1", 00:35:48.376 "trsvcid": "4420", 00:35:48.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.376 "prchk_reftag": false, 00:35:48.376 "prchk_guard": false, 00:35:48.376 "ctrlr_loss_timeout_sec": 0, 00:35:48.376 "reconnect_delay_sec": 0, 00:35:48.376 "fast_io_fail_timeout_sec": 0, 00:35:48.376 "psk": "key0", 00:35:48.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.376 "hdgst": false, 00:35:48.376 "ddgst": false 00:35:48.376 } 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "method": "bdev_nvme_set_hotplug", 00:35:48.376 "params": { 00:35:48.376 "period_us": 100000, 00:35:48.376 "enable": false 00:35:48.376 } 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "method": "bdev_wait_for_examine" 00:35:48.376 } 00:35:48.376 ] 00:35:48.376 }, 00:35:48.376 { 00:35:48.376 "subsystem": "nbd", 00:35:48.376 "config": [] 00:35:48.376 } 00:35:48.376 ] 00:35:48.376 }' 00:35:48.376 02:35:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:48.376 02:35:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:48.376 [2024-07-27 02:35:16.504298] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:35:48.376 [2024-07-27 02:35:16.504373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216048 ] 00:35:48.376 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.376 [2024-07-27 02:35:16.534131] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
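
Once the relaunched bdevperf is up, every refcount assertion that follows ((( 2 == 2 )), (( 1 == 1 )) and so on) is the same two-helper pattern from keyring/common.sh seen throughout this trace: query keyring_get_keys over the bperf socket, select one key with jq, and compare its refcnt. A condensed sketch of that pattern as exercised here:

  # Sketch of the get_key/get_refcnt pattern driving the checks below.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  get_refcnt() {
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r ".[] | select(.name == \"$1\") | .refcnt"
  }
  (( $(get_refcnt key0) == 2 ))  # keyring reference + the controller attached with --psk key0
  (( $(get_refcnt key1) == 1 ))  # keyring reference only
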
00:35:48.635 [2024-07-27 02:35:16.565430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.635 [2024-07-27 02:35:16.655365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.894 [2024-07-27 02:35:16.838740] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:49.459 02:35:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:49.459 02:35:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:49.459 02:35:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:49.459 02:35:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.459 02:35:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:35:49.718 02:35:17 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:49.718 02:35:17 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:35:49.718 02:35:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.718 02:35:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.718 02:35:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.718 02:35:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.718 02:35:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.975 02:35:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:49.976 02:35:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:35:49.976 02:35:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:49.976 02:35:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.976 02:35:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.976 02:35:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:49.976 02:35:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.233 02:35:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:50.233 02:35:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:50.233 02:35:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:50.233 02:35:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:50.492 02:35:18 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:50.492 02:35:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:50.492 02:35:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ywr1rD33yb /tmp/tmp.DKwezpmQrT 00:35:50.492 02:35:18 keyring_file -- keyring/file.sh@20 -- # killprocess 1216048 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1216048 ']' 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1216048 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1216048 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:50.492 02:35:18 
keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1216048' 00:35:50.492 killing process with pid 1216048 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@969 -- # kill 1216048 00:35:50.492 Received shutdown signal, test time was about 1.000000 seconds 00:35:50.492 00:35:50.492 Latency(us) 00:35:50.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.492 =================================================================================================================== 00:35:50.492 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:50.492 02:35:18 keyring_file -- common/autotest_common.sh@974 -- # wait 1216048 00:35:50.750 02:35:18 keyring_file -- keyring/file.sh@21 -- # killprocess 1214601 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1214601 ']' 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1214601 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1214601 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1214601' 00:35:50.750 killing process with pid 1214601 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@969 -- # kill 1214601 00:35:50.750 [2024-07-27 02:35:18.739383] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:50.750 02:35:18 keyring_file -- common/autotest_common.sh@974 -- # wait 1214601 00:35:51.010 00:35:51.010 real 0m14.099s 00:35:51.010 user 0m34.814s 00:35:51.010 sys 0m3.287s 00:35:51.010 02:35:19 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:51.010 02:35:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:51.010 ************************************ 00:35:51.010 END TEST keyring_file 00:35:51.010 ************************************ 00:35:51.010 02:35:19 -- spdk/autotest.sh@302 -- # [[ y == y ]] 00:35:51.010 02:35:19 -- spdk/autotest.sh@303 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:51.010 02:35:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:51.010 02:35:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:51.010 02:35:19 -- common/autotest_common.sh@10 -- # set +x 00:35:51.010 ************************************ 00:35:51.010 START TEST keyring_linux 00:35:51.010 ************************************ 00:35:51.010 02:35:19 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:51.269 * Looking for test storage... 
00:35:51.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:51.269 02:35:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:51.269 02:35:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.269 02:35:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.270 02:35:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.270 02:35:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.270 02:35:19 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.270 02:35:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.270 02:35:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.270 02:35:19 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.270 02:35:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:51.270 02:35:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:51.270 02:35:19 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:51.270 /tmp/:spdk-test:key0 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:51.270 02:35:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:51.270 02:35:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:51.270 /tmp/:spdk-test:key1 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1216434 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:51.270 02:35:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1216434 00:35:51.270 02:35:19 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1216434 ']' 00:35:51.270 02:35:19 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.270 02:35:19 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:51.270 02:35:19 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.270 02:35:19 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:51.270 02:35:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:51.270 [2024-07-27 02:35:19.332371] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:35:51.270 [2024-07-27 02:35:19.332449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216434 ] 00:35:51.270 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.270 [2024-07-27 02:35:19.366805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
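[editor's note] prep_key above writes each TLS PSK to a mode-0600 file in the interchange format produced by format_interchange_psk/format_key (nvmf/common.sh @702-@705). The trace only shows a bare "python -" heredoc, so the following is a sketch of what that snippet computes, written as an equivalent one-liner. The payload layout - key bytes followed by a little-endian CRC32, per the NVMe TLS PSK interchange format - is an assumption, but it is consistent with the NVMeTLSkey-1:00:MDAx...wJEiQ: strings that appear in the keyctl add lines below:

    # Sketch of format_key: NVMeTLSkey-1:<digest>:<base64(key || crc32le(key))>:
    format_key() {
        local prefix=$1 key=$2 digest=$3
        # key bytes + little-endian CRC32 trailer, base64-encoded (assumed layout)
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()))' "$prefix" "$key" "$digest"
    }

    # prep_key then persists the result exactly as traced above:
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0
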
00:35:51.270 [2024-07-27 02:35:19.393981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.528 [2024-07-27 02:35:19.479753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:51.787 02:35:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:51.787 [2024-07-27 02:35:19.714757] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.787 null0 00:35:51.787 [2024-07-27 02:35:19.746794] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:51.787 [2024-07-27 02:35:19.747292] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.787 02:35:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:51.787 737189079 00:35:51.787 02:35:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:51.787 892673758 00:35:51.787 02:35:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1216440 00:35:51.787 02:35:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:51.787 02:35:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1216440 /var/tmp/bperf.sock 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1216440 ']' 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:51.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:51.787 02:35:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:51.787 [2024-07-27 02:35:19.813206] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 24.07.0-rc3 initialization... 00:35:51.787 [2024-07-27 02:35:19.813279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216440 ] 00:35:51.787 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.787 [2024-07-27 02:35:19.848478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
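[editor's note] Unlike keyring_file, keyring_linux stores the formatted PSKs in the kernel session keyring; the bare numbers printed after the keyctl add calls above (737189079 and 892673758) are the serials the kernel assigned. A minimal round trip with the same key name and payload as the trace, for reference:

    # Add a user-type key to the session keyring (@s); keyctl prints the serial.
    sn=$(keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)

    keyctl search @s user :spdk-test:key0   # name -> serial, what get_keysn does
    keyctl print "$sn"                      # payload, used in the [[ ... == ... ]] check
    keyctl unlink "$sn"                     # cleanup step; prints "1 links removed"

These are the same keyctl search/print/unlink invocations the trace runs further down, just grouped to show the lifecycle of one key.
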
00:35:51.787 [2024-07-27 02:35:19.879523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.046 [2024-07-27 02:35:19.972275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.046 02:35:20 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:52.046 02:35:20 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:52.046 02:35:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:52.046 02:35:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:52.304 02:35:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:52.304 02:35:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:52.562 02:35:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:52.563 02:35:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:52.820 [2024-07-27 02:35:20.853669] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:52.820 nvme0n1 00:35:52.820 02:35:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:52.820 02:35:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:52.820 02:35:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:52.820 02:35:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:52.820 02:35:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:52.820 02:35:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.078 02:35:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:53.078 02:35:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:53.078 02:35:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:53.078 02:35:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:53.078 02:35:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:53.078 02:35:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.078 02:35:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:53.336 02:35:21 keyring_linux -- keyring/linux.sh@25 -- # sn=737189079 00:35:53.336 02:35:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:53.336 02:35:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:53.336 02:35:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 737189079 == \7\3\7\1\8\9\0\7\9 ]] 00:35:53.336 02:35:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 737189079 00:35:53.336 02:35:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:53.336 02:35:21 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:53.594 Running I/O for 1 seconds... 00:35:54.526 00:35:54.526 Latency(us) 00:35:54.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.526 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:54.526 nvme0n1 : 1.03 3759.82 14.69 0.00 0.00 33621.65 9757.58 42913.94 00:35:54.526 =================================================================================================================== 00:35:54.526 Total : 3759.82 14.69 0.00 0.00 33621.65 9757.58 42913.94 00:35:54.526 0 00:35:54.526 02:35:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:54.526 02:35:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:54.783 02:35:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:54.783 02:35:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:54.783 02:35:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:54.783 02:35:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:54.783 02:35:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.783 02:35:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:55.041 02:35:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:55.041 02:35:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:55.041 02:35:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:55.041 02:35:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.041 02:35:23 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:35:55.041 02:35:23 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.041 02:35:23 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:55.041 02:35:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:55.041 02:35:23 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:55.041 02:35:23 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:55.041 02:35:23 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.041 02:35:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:55.299 [2024-07-27 02:35:23.358582] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:55.299 [2024-07-27 02:35:23.358774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83a00 (107): Transport endpoint is not connected 00:35:55.299 [2024-07-27 02:35:23.359765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83a00 (9): Bad file descriptor 00:35:55.299 [2024-07-27 02:35:23.360763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:55.299 [2024-07-27 02:35:23.360785] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:55.299 [2024-07-27 02:35:23.360801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:55.299 request: 00:35:55.299 { 00:35:55.299 "name": "nvme0", 00:35:55.299 "trtype": "tcp", 00:35:55.299 "traddr": "127.0.0.1", 00:35:55.299 "adrfam": "ipv4", 00:35:55.299 "trsvcid": "4420", 00:35:55.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.299 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.299 "prchk_reftag": false, 00:35:55.299 "prchk_guard": false, 00:35:55.299 "hdgst": false, 00:35:55.299 "ddgst": false, 00:35:55.299 "psk": ":spdk-test:key1", 00:35:55.299 "method": "bdev_nvme_attach_controller", 00:35:55.299 "req_id": 1 00:35:55.300 } 00:35:55.300 Got JSON-RPC error response 00:35:55.300 response: 00:35:55.300 { 00:35:55.300 "code": -5, 00:35:55.300 "message": "Input/output error" 00:35:55.300 } 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@33 -- # sn=737189079 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 737189079 00:35:55.300 1 links removed 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@33 -- # sn=892673758 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 892673758 00:35:55.300 1 links removed 00:35:55.300 02:35:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1216440 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1216440 ']' 00:35:55.300 02:35:23 keyring_linux 
-- common/autotest_common.sh@954 -- # kill -0 1216440 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1216440 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1216440' 00:35:55.300 killing process with pid 1216440 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@969 -- # kill 1216440 00:35:55.300 Received shutdown signal, test time was about 1.000000 seconds 00:35:55.300 00:35:55.300 Latency(us) 00:35:55.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.300 =================================================================================================================== 00:35:55.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:55.300 02:35:23 keyring_linux -- common/autotest_common.sh@974 -- # wait 1216440 00:35:55.560 02:35:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1216434 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1216434 ']' 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1216434 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1216434 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1216434' 00:35:55.560 killing process with pid 1216434 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@969 -- # kill 1216434 00:35:55.560 02:35:23 keyring_linux -- common/autotest_common.sh@974 -- # wait 1216434 00:35:56.124 00:35:56.124 real 0m4.865s 00:35:56.124 user 0m9.240s 00:35:56.124 sys 0m1.410s 00:35:56.124 02:35:24 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:56.124 02:35:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.124 ************************************ 00:35:56.124 END TEST keyring_linux 00:35:56.124 ************************************ 00:35:56.124 02:35:24 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@322 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@336 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@362 -- # '[' 0 -eq 1 ']' 00:35:56.124 02:35:24 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:35:56.124 02:35:24 
-- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:35:56.124 02:35:24 -- spdk/autotest.sh@377 -- # [[ 0 -eq 1 ]] 00:35:56.124 02:35:24 -- spdk/autotest.sh@382 -- # trap - SIGINT SIGTERM EXIT 00:35:56.124 02:35:24 -- spdk/autotest.sh@384 -- # timing_enter post_cleanup 00:35:56.124 02:35:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:56.124 02:35:24 -- common/autotest_common.sh@10 -- # set +x 00:35:56.124 02:35:24 -- spdk/autotest.sh@385 -- # autotest_cleanup 00:35:56.124 02:35:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:56.124 02:35:24 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:56.124 02:35:24 -- common/autotest_common.sh@10 -- # set +x 00:35:58.022 INFO: APP EXITING 00:35:58.022 INFO: killing all VMs 00:35:58.022 INFO: killing vhost app 00:35:58.022 INFO: EXIT DONE 00:35:58.957 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:35:58.957 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:35:58.957 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:58.957 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:58.957 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:58.957 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:58.957 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:35:58.957 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:58.957 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:58.957 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:58.957 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:58.957 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:58.957 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:58.957 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:58.957 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:58.957 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:58.957 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:00.333 Cleaning 00:36:00.333 Removing: /var/run/dpdk/spdk0/config 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:00.333 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:00.333 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:00.333 Removing: /var/run/dpdk/spdk1/config 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:00.333 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:00.333 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:00.333 Removing: /var/run/dpdk/spdk1/mp_socket 
00:36:00.333 Removing: /var/run/dpdk/spdk2/config 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:00.333 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:00.333 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:00.333 Removing: /var/run/dpdk/spdk3/config 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:00.333 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:00.333 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:00.333 Removing: /var/run/dpdk/spdk4/config 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:00.333 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:00.333 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:00.333 Removing: /dev/shm/bdev_svc_trace.1 00:36:00.333 Removing: /dev/shm/nvmf_trace.0 00:36:00.333 Removing: /dev/shm/spdk_tgt_trace.pid898163 00:36:00.333 Removing: /var/run/dpdk/spdk0 00:36:00.333 Removing: /var/run/dpdk/spdk1 00:36:00.333 Removing: /var/run/dpdk/spdk2 00:36:00.333 Removing: /var/run/dpdk/spdk3 00:36:00.333 Removing: /var/run/dpdk/spdk4 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1012368 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1016046 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1019988 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1023828 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1023830 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1024484 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1025019 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1025673 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1026078 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1026080 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1026341 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1026349 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1026444 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1027013 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1027664 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1028307 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1028670 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1028727 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1028870 00:36:00.333 Removing: 
/var/run/dpdk/spdk_pid1029746 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1030466 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1035795 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1061074 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1063859 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1065176 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1066961 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1067121 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1067257 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1067393 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1067827 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1069064 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1069746 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1070075 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1071670 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1072090 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1072649 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1075044 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1078632 00:36:00.333 Removing: /var/run/dpdk/spdk_pid1082055 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1105702 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1108460 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1112232 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1113177 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1114265 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1116832 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1119066 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1123264 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1123268 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1126048 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1126195 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1126376 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1126712 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1126717 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1127901 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1129588 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1130782 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1131963 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1133152 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1134429 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1138111 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1138519 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1139836 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1140576 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1144249 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1146133 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1149539 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1152987 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1159821 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1164031 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1164037 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1178307 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1178712 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1179118 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1179639 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1180115 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1180625 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1181036 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1181446 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1183934 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1184081 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1187878 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1187936 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1189542 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1195179 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1195191 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1198019 00:36:00.334 Removing: 
/var/run/dpdk/spdk_pid1199352 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1200757 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1201615 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1203018 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1203894 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1209202 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1209549 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1209943 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1211492 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1211835 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1212172 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1214601 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1214620 00:36:00.334 Removing: /var/run/dpdk/spdk_pid1216048 00:36:00.593 Removing: /var/run/dpdk/spdk_pid1216434 00:36:00.593 Removing: /var/run/dpdk/spdk_pid1216440 00:36:00.593 Removing: /var/run/dpdk/spdk_pid896612 00:36:00.593 Removing: /var/run/dpdk/spdk_pid897347 00:36:00.593 Removing: /var/run/dpdk/spdk_pid898163 00:36:00.593 Removing: /var/run/dpdk/spdk_pid898602 00:36:00.593 Removing: /var/run/dpdk/spdk_pid899288 00:36:00.593 Removing: /var/run/dpdk/spdk_pid899432 00:36:00.593 Removing: /var/run/dpdk/spdk_pid900142 00:36:00.593 Removing: /var/run/dpdk/spdk_pid900159 00:36:00.593 Removing: /var/run/dpdk/spdk_pid900403 00:36:00.593 Removing: /var/run/dpdk/spdk_pid901606 00:36:00.593 Removing: /var/run/dpdk/spdk_pid902632 00:36:00.593 Removing: /var/run/dpdk/spdk_pid902834 00:36:00.593 Removing: /var/run/dpdk/spdk_pid903121 00:36:00.593 Removing: /var/run/dpdk/spdk_pid903334 00:36:00.593 Removing: /var/run/dpdk/spdk_pid903524 00:36:00.593 Removing: /var/run/dpdk/spdk_pid903680 00:36:00.593 Removing: /var/run/dpdk/spdk_pid903839 00:36:00.593 Removing: /var/run/dpdk/spdk_pid904021 00:36:00.593 Removing: /var/run/dpdk/spdk_pid904324 00:36:00.593 Removing: /var/run/dpdk/spdk_pid906680 00:36:00.593 Removing: /var/run/dpdk/spdk_pid906844 00:36:00.593 Removing: /var/run/dpdk/spdk_pid907006 00:36:00.593 Removing: /var/run/dpdk/spdk_pid907017 00:36:00.593 Removing: /var/run/dpdk/spdk_pid907325 00:36:00.593 Removing: /var/run/dpdk/spdk_pid907453 00:36:00.593 Removing: /var/run/dpdk/spdk_pid907760 00:36:00.593 Removing: /var/run/dpdk/spdk_pid907884 00:36:00.593 Removing: /var/run/dpdk/spdk_pid908055 00:36:00.593 Removing: /var/run/dpdk/spdk_pid908067 00:36:00.593 Removing: /var/run/dpdk/spdk_pid908290 00:36:00.593 Removing: /var/run/dpdk/spdk_pid908359 00:36:00.593 Removing: /var/run/dpdk/spdk_pid908730 00:36:00.593 Removing: /var/run/dpdk/spdk_pid908882 00:36:00.593 Removing: /var/run/dpdk/spdk_pid909139 00:36:00.593 Removing: /var/run/dpdk/spdk_pid911155 00:36:00.593 Removing: /var/run/dpdk/spdk_pid913765 00:36:00.593 Removing: /var/run/dpdk/spdk_pid921119 00:36:00.593 Removing: /var/run/dpdk/spdk_pid921642 00:36:00.593 Removing: /var/run/dpdk/spdk_pid924034 00:36:00.593 Removing: /var/run/dpdk/spdk_pid924309 00:36:00.593 Removing: /var/run/dpdk/spdk_pid926822 00:36:00.593 Removing: /var/run/dpdk/spdk_pid930535 00:36:00.593 Removing: /var/run/dpdk/spdk_pid932613 00:36:00.593 Removing: /var/run/dpdk/spdk_pid938883 00:36:00.593 Removing: /var/run/dpdk/spdk_pid944087 00:36:00.593 Removing: /var/run/dpdk/spdk_pid945400 00:36:00.593 Removing: /var/run/dpdk/spdk_pid946077 00:36:00.593 Removing: /var/run/dpdk/spdk_pid956920 00:36:00.593 Removing: /var/run/dpdk/spdk_pid959209 00:36:00.593 Clean 00:36:00.593 02:35:28 -- common/autotest_common.sh@1451 -- # return 0 00:36:00.593 02:35:28 -- spdk/autotest.sh@386 -- # timing_exit post_cleanup 00:36:00.593 
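[editor's note] After cleanup, autotest.sh runs the lcov coverage pass shown in the following lines. Stripped of the long --rc option lists and absolute workspace paths, the flow reduces to this sketch ($spdk_dir stands in for the checkout path; the trace runs each -r filter as a separate command, the loop here is a condensation):

    # capture counters accumulated while the tests ran (autotest.sh@395)
    lcov -q -c -d "$spdk_dir" -t "$(hostname)" -o cov_test.info
    # fold in the pre-test baseline (autotest.sh@396)
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # strip external and tool sources from the report (autotest.sh@397-@401)
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info
    done
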
02:35:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:00.593 02:35:28 -- common/autotest_common.sh@10 -- # set +x 00:36:00.593 02:35:28 -- spdk/autotest.sh@388 -- # timing_exit autotest 00:36:00.593 02:35:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:00.593 02:35:28 -- common/autotest_common.sh@10 -- # set +x 00:36:00.593 02:35:28 -- spdk/autotest.sh@389 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:00.593 02:35:28 -- spdk/autotest.sh@391 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:00.593 02:35:28 -- spdk/autotest.sh@391 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:00.593 02:35:28 -- spdk/autotest.sh@393 -- # hash lcov 00:36:00.593 02:35:28 -- spdk/autotest.sh@393 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:00.593 02:35:28 -- spdk/autotest.sh@395 -- # hostname 00:36:00.593 02:35:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:00.851 geninfo: WARNING: invalid characters removed from testname! 00:36:32.917 02:35:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:32.917 02:36:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:36.191 02:36:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:38.714 02:36:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:42.893 02:36:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:46.199 02:36:13 -- 
spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:50.382 02:36:17 -- spdk/autotest.sh@402 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:50.383 02:36:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:50.383 02:36:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:50.383 02:36:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.383 02:36:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.383 02:36:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.383 02:36:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.383 02:36:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.383 02:36:17 -- paths/export.sh@5 -- $ export PATH 00:36:50.383 02:36:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.383 02:36:17 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:50.383 02:36:17 -- common/autobuild_common.sh@447 -- $ date +%s 00:36:50.383 02:36:17 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722040577.XXXXXX 00:36:50.383 02:36:17 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722040577.lhq6F5 00:36:50.383 02:36:17 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:36:50.383 02:36:17 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:36:50.383 02:36:17 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:36:50.383 02:36:17 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:36:50.383 02:36:17 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:50.383 02:36:17 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:50.383 02:36:17 -- common/autobuild_common.sh@463 -- $ get_config_params 00:36:50.383 02:36:17 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:36:50.383 02:36:17 -- common/autotest_common.sh@10 -- $ set +x 00:36:50.383 02:36:17 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:36:50.383 02:36:17 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:36:50.383 02:36:17 -- pm/common@17 -- $ local monitor 00:36:50.383 02:36:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.383 02:36:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.383 02:36:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.383 02:36:17 -- pm/common@21 -- $ date +%s 00:36:50.383 02:36:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.383 02:36:17 -- pm/common@21 -- $ date +%s 00:36:50.383 02:36:17 -- pm/common@25 -- $ sleep 1 00:36:50.383 02:36:17 -- pm/common@21 -- $ date +%s 00:36:50.383 02:36:17 -- pm/common@21 -- $ date +%s 00:36:50.383 02:36:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722040577 00:36:50.383 02:36:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722040577 00:36:50.383 02:36:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722040577 00:36:50.383 02:36:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722040577 00:36:50.383 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722040577_collect-vmstat.pm.log 00:36:50.383 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722040577_collect-cpu-load.pm.log 00:36:50.383 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722040577_collect-cpu-temp.pm.log 00:36:50.383 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722040577_collect-bmc-pm.bmc.pm.log 00:36:50.641 02:36:18 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:36:50.641 02:36:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:36:50.641 02:36:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:50.641 02:36:18 -- spdk/autopackage.sh@13 
-- $ [[ 0 -eq 1 ]] 00:36:50.641 02:36:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:50.641 02:36:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:50.641 02:36:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:50.641 02:36:18 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:50.641 02:36:18 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:50.641 02:36:18 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:50.900 02:36:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:50.900 02:36:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:50.900 02:36:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:50.900 02:36:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:50.900 02:36:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.900 02:36:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:50.900 02:36:18 -- pm/common@44 -- $ pid=1228166 00:36:50.900 02:36:18 -- pm/common@50 -- $ kill -TERM 1228166 00:36:50.900 02:36:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.900 02:36:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:50.900 02:36:18 -- pm/common@44 -- $ pid=1228168 00:36:50.900 02:36:18 -- pm/common@50 -- $ kill -TERM 1228168 00:36:50.900 02:36:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.900 02:36:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:50.900 02:36:18 -- pm/common@44 -- $ pid=1228170 00:36:50.900 02:36:18 -- pm/common@50 -- $ kill -TERM 1228170 00:36:50.900 02:36:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:50.900 02:36:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:50.900 02:36:18 -- pm/common@44 -- $ pid=1228201 00:36:50.900 02:36:18 -- pm/common@50 -- $ sudo -E kill -TERM 1228201 00:36:50.900 + [[ -n 796949 ]] 00:36:50.900 + sudo kill 796949 00:36:50.911 [Pipeline] } 00:36:50.928 [Pipeline] // stage 00:36:50.933 [Pipeline] } 00:36:50.950 [Pipeline] // timeout 00:36:50.955 [Pipeline] } 00:36:50.971 [Pipeline] // catchError 00:36:50.976 [Pipeline] } 00:36:50.993 [Pipeline] // wrap 00:36:50.999 [Pipeline] } 00:36:51.013 [Pipeline] // catchError 00:36:51.023 [Pipeline] stage 00:36:51.025 [Pipeline] { (Epilogue) 00:36:51.039 [Pipeline] catchError 00:36:51.041 [Pipeline] { 00:36:51.055 [Pipeline] echo 00:36:51.057 Cleanup processes 00:36:51.063 [Pipeline] sh 00:36:51.350 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:51.350 1228312 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:36:51.350 1228431 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:51.364 [Pipeline] sh 00:36:51.645 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:51.645 ++ grep -v 'sudo pgrep' 00:36:51.645 ++ awk '{print $1}' 00:36:51.645 + sudo kill -9 1228312 00:36:51.657 [Pipeline] sh 00:36:51.938 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:04.150 [Pipeline] sh 00:37:04.437 + 
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:04.437 Artifacts sizes are good 00:37:04.453 [Pipeline] archiveArtifacts 00:37:04.460 Archiving artifacts 00:37:04.730 [Pipeline] sh 00:37:05.016 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:05.036 [Pipeline] cleanWs 00:37:05.047 [WS-CLEANUP] Deleting project workspace... 00:37:05.047 [WS-CLEANUP] Deferred wipeout is used... 00:37:05.054 [WS-CLEANUP] done 00:37:05.056 [Pipeline] } 00:37:05.076 [Pipeline] // catchError 00:37:05.089 [Pipeline] sh 00:37:05.369 + logger -p user.info -t JENKINS-CI 00:37:05.377 [Pipeline] } 00:37:05.392 [Pipeline] // stage 00:37:05.397 [Pipeline] } 00:37:05.412 [Pipeline] // node 00:37:05.417 [Pipeline] End of Pipeline 00:37:05.478 Finished: SUCCESS
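[editor's note] For reference, the resource-monitor shutdown traced in the autopackage section above (pm/common@40-@50) follows the pattern sketched below. The function body is a reconstruction from those trace lines, not the canonical scripts/perf/pm/common source, and $output_dir is a stand-in for the spdk/../output path:

    # Sketch of stop_monitor_resources, reconstructed from the pm/common trace.
    stop_monitor_resources() {
        local monitor pid pidfile
        for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
            pidfile="$output_dir/power/$monitor.pid"
            [[ -e $pidfile ]] || continue        # pm/common@43 existence guard
            pid=$(< "$pidfile")                  # pm/common@44: pid=...
            if [[ $monitor == collect-bmc-pm ]]; then
                sudo -E kill -TERM "$pid"        # the trace escalates for the BMC monitor
            else
                kill -TERM "$pid"                # pm/common@50
            fi
        done
    }
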